guru1984

This model is a fine-tuned version of microsoft/Phi-3.5-mini-instruct on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9331
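Assuming the reported loss is mean token-level cross-entropy in nats (the usual Hugging Face `Trainer` convention), the implied evaluation perplexity can be sketched as:

```python
import math

# Eval loss reported above; perplexity = exp(cross-entropy) under the
# assumption that the loss is mean per-token cross-entropy in nats.
eval_loss = 1.9331
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.2f}")  # ≈ 6.91
```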

Model description

More information needed

Intended uses & limitations

More information needed
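Since the framework versions below list PEFT, this checkpoint is presumably an adapter on top of microsoft/Phi-3.5-mini-instruct. A hypothetical loading sketch (the helper name is ours, not part of the repo; requires `transformers`, `peft`, and network access to download the weights):

```python
def load_guru1984():
    # Imports kept inside the helper so the sketch stays self-contained.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "microsoft/Phi-3.5-mini-instruct"
    base = AutoModelForCausalLM.from_pretrained(base_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    # Apply the fine-tuned adapter weights on top of the base model.
    model = PeftModel.from_pretrained(base, "emdemor/guru1984")
    return model, tokenizer
```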

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
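The linear schedule with a 0.1 warmup ratio can be sketched as follows. The total step count (1900) is read off the last row of the training-results table; the function name is ours, not part of the training code:

```python
def linear_lr(step, base_lr=5e-4, total_steps=1900, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    warmup_steps = int(total_steps * warmup_ratio)  # 190 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_lr(190))   # peak learning rate: 0.0005
print(linear_lr(1900))  # end of training: 0.0
```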

Training results

Training Loss | Epoch  | Step | Validation Loss
4.0802        | 0.0786 |   50 | 3.3923
2.6833        | 0.1572 |  100 | 2.3272
2.1534        | 0.2358 |  150 | 2.1625
2.0476        | 0.3145 |  200 | 2.1005
1.9929        | 0.3931 |  250 | 2.0717
1.974         | 0.4717 |  300 | 2.0509
1.9854        | 0.5503 |  350 | 2.0435
2.0109        | 0.6289 |  400 | 2.0358
1.9939        | 0.7075 |  450 | 2.0285
1.9672        | 0.7862 |  500 | 2.0150
1.9565        | 0.8648 |  550 | 2.0115
1.9706        | 0.9434 |  600 | 2.0062
1.9352        | 1.0220 |  650 | 1.9949
1.8647        | 1.1006 |  700 | 1.9930
1.9109        | 1.1792 |  750 | 1.9840
1.885         | 1.2579 |  800 | 1.9806
1.8864        | 1.3365 |  850 | 1.9878
1.8931        | 1.4151 |  900 | 1.9825
1.8599        | 1.4937 |  950 | 1.9755
1.9019        | 1.5723 | 1000 | 1.9695
1.9081        | 1.6509 | 1050 | 1.9599
1.8736        | 1.7296 | 1100 | 1.9583
1.8939        | 1.8082 | 1150 | 1.9567
1.8867        | 1.8868 | 1200 | 1.9534
1.8875        | 1.9654 | 1250 | 1.9479
1.8677        | 2.0440 | 1300 | 1.9498
1.8152        | 2.1226 | 1350 | 1.9527
1.8573        | 2.2013 | 1400 | 1.9523
1.8433        | 2.2799 | 1450 | 1.9441
1.828         | 2.3585 | 1500 | 1.9455
1.8298        | 2.4371 | 1550 | 1.9423
1.8258        | 2.5157 | 1600 | 1.9376
1.8314        | 2.5943 | 1650 | 1.9373
1.8436        | 2.6730 | 1700 | 1.9394
1.8253        | 2.7516 | 1750 | 1.9373
1.8055        | 2.8302 | 1800 | 1.9365
1.8207        | 2.9088 | 1850 | 1.9342
1.8514        | 2.9874 | 1900 | 1.9331

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1

Model tree for emdemor/guru1984

This checkpoint is a PEFT adapter of microsoft/Phi-3.5-mini-instruct.