yaml-generator-code-llama

This model is a fine-tuned version of codellama/CodeLlama-7b-hf; the fine-tuning dataset is not documented (it appears as "None" in the auto-generated card). It achieves the following results on the evaluation set:

  • Loss: 2.2591

Model description

More information needed

Intended uses & limitations

More information needed
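
No usage guidance is documented, so the sketch below is only one plausible way to run the model. It assumes the repository holds a full causal-LM checkpoint loadable with vanilla Transformers (it may instead be a PEFT/LoRA adapter, since the hosting page cannot determine the library), and the prompt format is a guess; only the model id comes from the repository name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "peterbeamish/yaml-generator-code-llama"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fits a 7B model on a single ~16 GB GPU
    device_map="auto",          # requires the `accelerate` package
)

# Hypothetical prompt: the training prompt format is not documented.
prompt = "# Write a Kubernetes Deployment manifest for an nginx container\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```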

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 400
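
As an illustration, here is how the settings above map onto transformers.TrainingArguments. The author's actual training script is not published, so this is a reconstruction, and output_dir is a hypothetical placeholder.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="yaml-generator-code-llama",  # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=400,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the Transformers
    # defaults (adam_beta1, adam_beta2, adam_epsilon), so nothing extra is needed.
)
```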

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2925        | 20.0  | 20   | 1.2292          |
| 0.2156        | 40.0  | 40   | 0.9743          |
| 0.07          | 60.0  | 60   | 1.4266          |
| 0.007         | 80.0  | 80   | 2.0256          |
| 0.0041        | 100.0 | 100  | 1.9838          |
| 0.0015        | 120.0 | 120  | 2.0320          |
| 0.0012        | 140.0 | 140  | 2.0818          |
| 0.0012        | 160.0 | 160  | 2.1403          |
| 0.0012        | 180.0 | 180  | 2.1771          |
| 0.0012        | 200.0 | 200  | 2.1751          |
| 0.0012        | 220.0 | 220  | 2.1825          |
| 0.0012        | 240.0 | 240  | 2.2240          |
| 0.0012        | 260.0 | 260  | 2.2226          |
| 0.0012        | 280.0 | 280  | 2.2172          |
| 0.0012        | 300.0 | 300  | 2.2235          |
| 0.0012        | 320.0 | 320  | 2.2202          |
| 0.0012        | 340.0 | 340  | 2.2471          |
| 0.0012        | 360.0 | 360  | 2.2475          |
| 0.0012        | 380.0 | 380  | 2.2709          |
| 0.0012        | 400.0 | 400  | 2.2591          |

Note that validation loss reaches its minimum at step 40 (0.9743) and rises steadily afterwards while the training loss collapses toward zero, suggesting the final checkpoint is heavily overfit relative to the step-40 one.

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1
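
To approximate this environment, the listed versions can be pinned directly; the CUDA 11.8 build of PyTorch (2.0.1+cu118) comes from the dedicated wheel index:

```bash
pip install transformers==4.34.0 datasets==2.14.5 tokenizers==0.14.1
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
```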