Uploaded model

  • Developed by: UsernameJustAnother
  • License: apache-2.0
  • Finetuned from model: unsloth/Mistral-Nemo-Instruct-2407

Experimental RP finetune on a secret-sauce dataset, trained with rsLoRA (r = 64) on a Colab A100 instance. Training used about 30 GB of VRAM, and 2 epochs took roughly 3 hours.

==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 8,160 | Num Epochs = 2
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 2,040
 "-____-"     Number of trainable parameters = 228,065,280


        r = 64, 
        target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                          "gate_proj", "up_proj", "down_proj",],
        lora_alpha = 64, 
        lora_dropout = 0, # Supports any, but = 0 is optimized
        bias = "none",    # Supports any, but = "none" is optimized
        use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
        random_state = 3407,
        use_rslora = True,  # rank-stabilized LoRA: scales by lora_alpha / sqrt(r) = 8 here
        loftq_config = None,

        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        num_train_epochs = 2, 
        learning_rate = 2e-5, # reduced from 2e-4; could go lower (5e-5, then 1e-5)
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit", 
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
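
For context, here is a minimal sketch of how the settings above slot into the usual Unsloth + TRL supervised-finetuning flow. It is not the exact training script: the max_seq_length, load_in_4bit, dataset path, dataset_text_field and output_dir values are illustrative assumptions, the dataset itself is private, and the SFTTrainer keyword arguments assume the TRL version used in the standard Unsloth Colab notebooks.

    # Sketch only -- hyperparameters are from the card; items marked "assumption" are not.
    from unsloth import FastLanguageModel, is_bfloat16_supported
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "unsloth/Mistral-Nemo-Instruct-2407",
        max_seq_length = 8192,     # assumption: context length is not stated in the card
        load_in_4bit = True,       # assumption: 4-bit loading of the base is not stated in the card
    )

    # LoRA adapter with the r / alpha / target_modules / rsLoRA settings listed above.
    model = FastLanguageModel.get_peft_model(
        model,
        r = 64, lora_alpha = 64, lora_dropout = 0, bias = "none",
        target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                          "gate_proj", "up_proj", "down_proj"],
        use_gradient_checkpointing = "unsloth",
        use_rslora = True,
        random_state = 3407, loftq_config = None,
    )

    # Placeholder for the private "secret sauce" RP dataset (8,160 examples); path is hypothetical.
    dataset = load_dataset("json", data_files = "rp_dataset.jsonl", split = "train")

    # TRL trainer with the training arguments listed above.
    trainer = SFTTrainer(
        model = model,
        tokenizer = tokenizer,
        train_dataset = dataset,
        dataset_text_field = "text",   # assumption: depends on how the dataset is formatted
        max_seq_length = 8192,
        args = TrainingArguments(
            per_device_train_batch_size = 2, gradient_accumulation_steps = 4,
            warmup_steps = 5, num_train_epochs = 2, learning_rate = 2e-5,
            fp16 = not is_bfloat16_supported(), bf16 = is_bfloat16_supported(),
            logging_steps = 1, optim = "adamw_8bit", weight_decay = 0.01,
            lr_scheduler_type = "linear", seed = 3407,
            output_dir = "outputs",    # assumption
        ),
    )
    trainer.train()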

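The released weights are BF16 safetensors (~12.2B parameters). The card does not include usage code, so the following is a minimal local-inference sketch, assuming the standard Transformers API and that the tokenizer keeps the Mistral-Nemo-Instruct chat template; the prompt and sampling settings are illustrative only.

    # Minimal loading/generation sketch; sampling settings are illustrative, not a recommendation.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "UsernameJustAnother/Nemo-12B-Marlin-v1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype = torch.bfloat16,   # weights are stored in BF16
        device_map = "auto",
    )

    messages = [{"role": "user", "content": "Introduce yourself in character as a ship's navigator."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt = True, return_tensors = "pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens = 256, do_sample = True, temperature = 0.8)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens = True))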