# cs221-afro-xlmr-large-amh-finetuned-20-epochs
This model is a fine-tuned version of Davlan/afro-xlmr-large on an unspecified dataset (the auto-generated card lists it as "None"). It achieves the following results on the evaluation set, which correspond to the epoch-5 checkpoint in the training results table below (a usage sketch follows the list):
- Loss: 0.2541
- F1: 0.7241
- Roc Auc: 0.8299
- Accuracy: 0.5197
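
The combination of micro-style F1, ROC AUC, and a lower exact-match accuracy suggests a multi-label classification head. A minimal inference sketch, assuming the checkpoint is published under the repo id below, that the head is multi-label with sigmoid-activated outputs, and that 0.5 is a reasonable threshold (none of which the card states explicitly):

```python
# Minimal inference sketch. Assumptions (not stated by the card): the head is
# multi-label, outputs are sigmoid-activated, and 0.5 is a sensible threshold.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sercetexam9/cs221-afro-xlmr-large-amh-finetuned-20-epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "..."  # replace with an Amharic input sentence
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits).squeeze(0)            # one probability per label
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```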
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a TrainingArguments sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
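
A minimal sketch of these hyperparameters expressed as transformers.TrainingArguments (the output_dir is a placeholder; note the list says 10 epochs despite the "20-epochs" repo name):

```python
# Sketch only: maps the hyperparameter list above onto TrainingArguments.
# output_dir is a placeholder; eval/logging cadence is inferred from the
# per-epoch rows in the results table, not stated by the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cs221-afro-xlmr-large-amh-finetuned-20-epochs",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=10,
    eval_strategy="epoch",     # the results table reports metrics once per epoch
    logging_strategy="epoch",
)
```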
### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4225        | 1.0   | 89   | 0.3527          | 0.3104 | 0.5903  | 0.2761   |
| 0.2896        | 2.0   | 178  | 0.2815          | 0.6194 | 0.7431  | 0.4606   |
| 0.2408        | 3.0   | 267  | 0.2444          | 0.7125 | 0.8140  | 0.5099   |
| 0.1923        | 4.0   | 356  | 0.2497          | 0.7108 | 0.8113  | 0.5296   |
| 0.1707        | 5.0   | 445  | 0.2541          | 0.7241 | 0.8299  | 0.5197   |
| 0.132         | 6.0   | 534  | 0.2664          | 0.7073 | 0.8155  | 0.5113   |
| 0.1004        | 7.0   | 623  | 0.2756          | 0.7183 | 0.8216  | 0.5211   |
| 0.0841        | 8.0   | 712  | 0.2846          | 0.7229 | 0.8291  | 0.5183   |
| 0.0719        | 9.0   | 801  | 0.2910          | 0.7225 | 0.8290  | 0.5155   |
| 0.0667        | 10.0  | 890  | 0.2898          | 0.7212 | 0.8268  | 0.5169   |
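
A hedged sketch of how metrics like these are typically computed for multi-label outputs with scikit-learn; the micro averaging, the sigmoid activation, and the 0.5 threshold are assumptions, not stated by the card:

```python
# Hedged evaluation sketch: micro averaging and the 0.5 threshold are
# assumptions. "accuracy" here is subset accuracy (every label must match),
# which would explain why it sits below F1 in the table above.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score

def compute_metrics(logits: np.ndarray, labels: np.ndarray) -> dict:
    probs = 1 / (1 + np.exp(-logits))    # sigmoid over raw logits
    preds = (probs >= 0.5).astype(int)   # independent per-class threshold
    return {
        "f1": f1_score(labels, preds, average="micro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match per row
    }
```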
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0