
PracticeLLM/KoSOLAR-Platypus-10.7B

Model Details

Model Developers Kyujin Han (kyujinpy)

Method
LoRA with quantization.
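
The card does not record the exact quantization setup, so the following is only a minimal sketch of loading the base model in 4-bit with bitsandbytes before attaching LoRA adapters; the BitsAndBytesConfig values are illustrative assumptions, not the author's settings.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit quantization config (assumed values, not from the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Base model loaded quantized, ready for LoRA adapter training.
base_model = AutoModelForCausalLM.from_pretrained(
    "yanolja/KoSOLAR-10.7B-v0.2",
    quantization_config=bnb_config,
    device_map="auto",
)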

Base Model
yanolja/KoSOLAR-10.7B-v0.2

Dataset
kyujinpy/KOR-OpenOrca-Platypus-v3.
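
For reference, the training data can be pulled with the datasets library; a minimal sketch, assuming the default train split:

from datasets import load_dataset

# Load the Korean OpenOrca-Platypus mix used for fine-tuning.
dataset = load_dataset("kyujinpy/KOR-OpenOrca-Platypus-v3", split="train")
print(dataset[0])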

Hyperparameters

python finetune.py \
    --base_model yanolja/KoSOLAR-10.7B-v0.2 \
    --data_path kyujinpy/KOR-OpenOrca-Platypus-v3 \
    --output_dir ./Ko-PlatypusSOLAR-10.7B \
    --batch_size 64 \
    --micro_batch_size 1 \
    --num_epochs 5 \
    --learning_rate 2e-5 \
    --cutoff_len 2048 \
    --val_set_size 0 \
    --lora_r 64 \
    --lora_alpha 64 \
    --lora_dropout 0.05 \
    --lora_target_modules '[embed_tokens, q_proj, k_proj, v_proj, o_proj, gate_proj, down_proj, up_proj, lm_head]' \
    --train_on_inputs False \
    --add_eos_token False \
    --group_by_length False \
    --prompt_template_name en_simple \
    --lr_scheduler 'cosine'

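As a rough code-level equivalent, the LoRA flags above map onto a PEFT LoraConfig like the sketch below; the rank, alpha, dropout, and target modules are taken from the command, while the remaining wiring is an assumption about how the fine-tuning script uses them.

from peft import LoraConfig

# LoRA settings mirroring the command-line flags above (sketch, not the exact script).
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=[
        "embed_tokens", "q_proj", "k_proj", "v_proj",
        "o_proj", "gate_proj", "down_proj", "up_proj", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
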
Sharing everything is my belief.

Model Benchmark

Open Ko-LLM leaderboard & lm-evaluation-harness (zero-shot)
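
Scores come from the Open Ko-LLM leaderboard; a local zero-shot run can be approximated with lm-evaluation-harness. A minimal sketch, assuming a recent lm-eval install; task names and flags depend on your version, and the leaderboard uses its own evaluation setup, so numbers may differ:

lm_eval --model hf \
    --model_args pretrained=PracticeLLM/KoSOLAR-Platypus-10.7B,dtype=float16 \
    --tasks kobest_hellaswag,kobest_copa \
    --num_fewshot 0 \
    --batch_size 8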

Implementation Code

### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/KoSOLAR-Platypus-10.7B"

# Load the merged model in half precision, sharded across available devices.
OpenOrca = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)

# Tokenizer matching the model repo.
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
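
A short usage sketch with the loaded model and tokenizer; the prompt and generation parameters are illustrative, not recommended settings.

prompt = "한국의 수도는 어디인가요?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

# Generate a short completion; sampling parameters are illustrative.
with torch.no_grad():
    output_ids = OpenOrca.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )

print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))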