
Dumpling-Qwen2.5-1.5B-v2

A finetune of nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B on a collection of preference datasets.

Method

QLoRA ORPO tune on 2x RTX 3090 for 2 epochs.

import torch
from transformers import BitsAndBytesConfig

# Compute in bf16 to match the bf16=True training setting below
torch_dtype = torch.bfloat16

# QLoRA config: 4-bit NF4 quantization with double quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)
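The quantization config is passed to transformers when loading the base model. A minimal sketch, assuming the standard AutoModelForCausalLM/AutoTokenizer loaders and device_map="auto"; the card does not show the original loading code:

from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,  # load weights in 4-bit NF4
    torch_dtype=torch_dtype,
    device_map="auto",
)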

from peft import LoraConfig

# LoRA config: rank-64 adapters on all attention and MLP projections
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['up_proj', 'down_proj', 'gate_proj', 'k_proj', 'q_proj', 'v_proj', 'o_proj']
)
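The adapters then get attached to the quantized base model. A sketch of the usual peft pattern (the card does not show this step, so whether the original run wrapped the model manually or let the trainer do it is an assumption):

from peft import get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # cast norms/embeddings for stable 4-bit training
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable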

from trl import ORPOConfig

new_model = "Dumpling-Qwen2.5-1.5B-v2"  # run/output name from this card

# Training config
orpo_args = ORPOConfig(
    run_name=new_model,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    max_length=2048,
    max_prompt_length=1024,
    max_completion_length=1024,
    beta=0.1,  # weight of the odds-ratio loss relative to the NLL loss
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,  # effective batch size of 16 across the 2 GPUs
    optim="paged_adamw_8bit",
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=0.2,  # evaluate every 20% of total training steps
    logging_steps=1,
    warmup_steps=10,
    max_grad_norm=10,
    report_to="wandb",
    output_dir="./results/",
    bf16=True,
)
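These arguments feed into trl's ORPOTrainer. A minimal sketch, assuming a preference dataset with prompt/chosen/rejected columns; the dataset variable is a placeholder, not the actual training mix, and in newer trl releases the tokenizer argument is named processing_class:

from trl import ORPOTrainer

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=dataset["train"],  # placeholder: prompt/chosen/rejected columns
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # processing_class= in newer trl releases
)
trainer.train()
trainer.save_model(new_model)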