
Llama-2-7b-hf_oasst1_l0.0002_64

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf. The dataset is not recorded in this card, though the repository name suggests oasst1 (OpenAssistant). It achieves the following results on the evaluation set:

  • Loss: 1.3565

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 0
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.03
  • training_steps: 1875
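The batch-size settings above are internally consistent: with a per-device batch of 1 and 16 gradient-accumulation steps, each optimizer step processes an effective batch of 16 sequences (assuming a single device). A minimal pure-Python sketch; the steps-per-epoch figure is derived from the training log below, not separately reported:

```python
# Effective batch size: per-device batch * gradient-accumulation steps
# (single-GPU training assumed; multiply by device count otherwise).
train_batch_size = 1
gradient_accumulation_steps = 16
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16

# Rough epoch <-> step relation from the log: step 1870 lands at
# epoch 3.3915, so one epoch is roughly 551 optimizer steps.
steps_per_epoch = 1870 / 3.3915
print(round(steps_per_epoch))  # 551
```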

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 1.401         | 0.0018 | 1    | 1.6289          |
| 1.4429        | 0.3392 | 187  | 1.2571          |
| 1.1074        | 0.6783 | 374  | 1.2502          |
| 1.3471        | 1.0175 | 561  | 1.2494          |
| 1.2256        | 1.3566 | 748  | 1.2493          |
| 1.2047        | 1.6958 | 935  | 1.2460          |
| 0.8848        | 2.0349 | 1122 | 1.2662          |
| 1.0369        | 2.3741 | 1309 | 1.3172          |
| 0.8789        | 2.7132 | 1496 | 1.3079          |
| 0.736         | 3.0524 | 1683 | 1.3552          |
| 0.7258        | 3.3915 | 1870 | 1.3486          |

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1
This model is a PEFT adapter on top of meta-llama/Llama-2-7b-hf (repository: alexander-hm/Llama-2-7b-hf_oasst1_l0.0002_64).