Waktaverse-Llama-3-KO-8B-Instruct Model Card

Model Details

Waktaverse-Llama-3-KO-8B-Instruct is a Korean language model developed by the Waktaverse AI team. This large language model is a specialized version of Meta-Llama-3-8B-Instruct, tailored for Korean natural language processing tasks. It is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.

Model Sources

  • Repository: GitHub
  • Paper: [More Information Needed]

Uses

Direct Use

The model can be used directly for tasks such as text completion, summarization, and question answering without any fine-tuning.
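For quick experiments, the high-level pipeline API also works. The snippet below is a minimal sketch; the prompt and generation settings are illustrative and not taken from this card:

import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# "The capital of South Korea is" -- a simple Korean completion prompt
print(generator("λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ”", max_new_tokens=32)[0]["generated_text"])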

Out-of-Scope Use

This model is not intended for high-stakes decision-making, including medical, legal, or safety-critical applications, due to the risks of relying on automated decisions. Moreover, any attempt to deploy the model in a manner that infringes on privacy rights or facilitates biased decision-making is strongly discouraged.

Bias, Risks, and Limitations

While Waktaverse Llama 3 is a robust model, it shares the common limitations of machine learning models, including potential biases in training data, vulnerability to adversarial attacks, and unpredictable behavior in edge cases. There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.

How to Get Started with the Model

You can run conversational inference using the Transformers Auto classes. We highly recommend adding a Korean system prompt for better output. Adjust the hyperparameters as needed.

Example Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = (
    "cuda:0" if torch.cuda.is_available() else # Nvidia GPU
    "mps" if torch.backends.mps.is_available() else # Apple Silicon GPU
    "cpu"
)

model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map=device,
)

################################################################################
# Generation parameters
################################################################################
num_return_sequences=1
max_new_tokens=1024
temperature=0.6
top_p=0.9
repetition_penalty=1.1

def prompt_template(system, user):
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def generate_response(system, user):
    prompt = prompt_template(system, user)
    
    input_ids = tokenizer.encode(
        prompt,
        add_special_tokens=False,
        return_tensors="pt"
    ).to(device)
    
    outputs = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.eos_token_id,
        num_return_sequences=num_return_sequences,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repetition_penalty
    )
    
    return tokenizer.decode(outputs[0], skip_special_tokens=False)

system_prompt = "λ‹€μŒ μ§€μ‹œμ‚¬ν•­μ— λŒ€ν•œ 응닡을 μž‘μ„±ν•΄μ£Όμ„Έμš”."  # "Write a response to the following instruction."
user_prompt = "ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ— λŒ€ν•΄ μ„€λͺ…ν•΄μ£Όμ„Έμš”."  # "Explain the Fibonacci sequence."
response = generate_response(system_prompt, user_prompt)
print(response)

Example Output

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

λ‹€μŒ μ§€μ‹œμ‚¬ν•­μ— λŒ€ν•œ 응닡을 μž‘μ„±ν•΄ μ£Όμ„Έμš”.<|eot_id|><|start_header_id|>user<|end_header_id|>

ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ— λŒ€ν•΄ μ„€λͺ…ν•΄μ£Όμ„Έμš”.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ μˆ˜ν•™μ—μ„œ 자주 μ‚¬μš©λ˜λŠ” μˆ˜μ—΄ 쀑 ν•˜λ‚˜λ‘œ, 0κ³Ό 1둜 μ‹œμž‘ν•˜μ—¬ λ‹€μŒ 항이 이전 두 ν•­μ˜ ν•©μœΌλ‘œ κ΅¬μ„±λ©λ‹ˆλ‹€. ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ 유λͺ…ν•œ μˆ˜ν•™μž 레온 μ•Œλ ‰μ‚°λ“œλ‘œλΉ„μΉ˜ ν”Όλ³΄λ‚˜μΉ˜κ°€ μ œμ•ˆν–ˆμœΌλ©°, 그의 이름을 λ”°μ„œ λͺ…λͺ…λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 이 μˆ˜μ—΄μ€ μžμ—°μˆ˜μ™€ μ •μˆ˜λ₯Ό ν¬ν•¨ν•˜λ©°, 각 항은 이전 두 ν•­μ˜ ν•©μž…λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, 첫 번째 항은 0이고 두 번째 항은 1이며, μ„Έ 번째 항은 2이고 λ„€ 번째 항은 3μž…λ‹ˆλ‹€. ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ 순차적으둜 μ¦κ°€ν•˜λŠ” νŠΉμ§•μ΄ μžˆμ§€λ§Œ, μˆ«μžκ°€ 컀질수둝 점점 더 λΉ λ₯΄κ²Œ μ¦κ°€ν•©λ‹ˆλ‹€. ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ λ‹€μ–‘ν•œ λΆ„μ•Όμ—μ„œ μ‚¬μš©λ˜λ©°, μˆ˜ν•™, 컴퓨터 κ³Όν•™, 생물학 λ“±μ—μ„œ μ€‘μš”ν•œ 역할을 ν•©λ‹ˆλ‹€.<|eot_id|>

Training Details

Training Data

The model was trained on the MarkrAI/KoCommercial-Dataset, which consists of various commercial-domain texts in Korean.
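The dataset can be inspected with the datasets library. A minimal sketch, assuming the dataset is publicly available on the Hub and exposes a train split:

from datasets import load_dataset

# split="train" is an assumption; check the dataset card for the actual splits
ds = load_dataset("MarkrAI/KoCommercial-Dataset", split="train")
print(ds)     # row count and column names
print(ds[0])  # one example record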

Training Procedure

Training used LoRA (Low-Rank Adaptation) for computational efficiency: 0.04 billion parameters (0.51% of the total) were trained. The hyperparameters are listed below, followed by a sketch of how they wire together.

Training Hyperparameters

################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch.bfloat16
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=True

################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.05
bias="none"

################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=1
per_device_train_batch_size=1
gradient_accumulation_steps=1
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
optim = "paged_adamw_32bit"
weight_decay=0.01

################################################################################
# SFT parameters
################################################################################
max_seq_length=4096
packing=False
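The listing above maps onto bitsandbytes and PEFT configuration objects. A minimal wiring sketch, assuming 4-bit QLoRA-style training with the peft library (the exact trainer setup is not specified in this card):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# bitsandbytes parameters from the listing above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # base model per this card
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# LoRA parameters from the listing above
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # should report roughly 0.5% trainable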

Evaluation

Metrics

  • Ko-HellaSwag:
  • Ko-MMLU:
  • Ko-Arc:
  • Ko-Truthful QA:
  • Ko-CommonGen V2:

Results

Benchmark          Waktaverse Llama 3 8B    Llama 3 8B
Ko-HellaSwag       0                        0
Ko-MMLU            0                        0
Ko-Arc             0                        0
Ko-Truthful QA     0                        0
Ko-CommonGen V2    0                        0

Technical Specifications

Compute Infrastructure

Hardware

  • GPU: NVIDIA GeForce RTX 4080 SUPER

Software

  • Operating System: Linux
  • Deep Learning Framework: Hugging Face Transformers, PyTorch


Citation

Waktaverse-Llama-3

@article{waktaversellama3modelcard,
  title={Waktaverse Llama 3 Model Card},
  author={AI@Waktaverse},
  year={2024},
  url = {https://huggingface.co./PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct}
}

Llama-3

@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

Model Card Authors

PathFinderKR
