# afro-xlmr-base-finetuned-augmentation-LUNAR

This model is a fine-tuned version of Davlan/afro-xlmr-base on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3494
- F1: 0.7257
- Roc Auc: 0.8196
- Accuracy: 0.5305
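The card does not include a usage example. The snippet below is a minimal inference sketch, assuming the model is a multi-label sequence classifier (as the combination of F1, ROC AUC, and subset accuracy suggests) whose per-label probabilities are thresholded at 0.5; the threshold and the example text are assumptions, not part of this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sercetexam9/afro-xlmr-base-finetuned-augmentation-LUNAR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example sentence to classify."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed multi-label setup: sigmoid per label, 0.5 decision threshold.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```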
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
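The training script itself is not part of this card. The sketch below maps the hyperparameters above onto `TrainingArguments`; the `problem_type`, label count, output directory, and per-epoch evaluation are assumptions made for illustration, and dataset preparation plus `compute_metrics` are omitted because they are not documented here.

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

# Assumed multi-label head; the real label set is not documented in this card.
model = AutoModelForSequenceClassification.from_pretrained(
    "Davlan/afro-xlmr-base",
    problem_type="multi_label_classification",  # assumption based on reported metrics
    num_labels=5,                               # placeholder label count
)

training_args = TrainingArguments(
    output_dir="afro-xlmr-base-finetuned-augmentation-LUNAR",  # assumed output dir
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=30,
    eval_strategy="epoch",  # assumed; the results table reports one row per epoch
)
# A Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=...,
# compute_metrics=...) would then be created and trainer.train() called.
```

The default Adam betas (0.9, 0.999) and epsilon 1e-08 match the values reported above, so no explicit optimizer arguments are needed.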
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|---------------|-------|------|-----------------|--------|---------|----------|
| 0.4381        | 1.0   | 144  | 0.4182          | 0.1435 | 0.5480  | 0.2164   |
| 0.3725        | 2.0   | 288  | 0.3353          | 0.5562 | 0.7123  | 0.4258   |
| 0.304         | 3.0   | 432  | 0.3059          | 0.6420 | 0.7561  | 0.4834   |
| 0.2326        | 4.0   | 576  | 0.3015          | 0.6706 | 0.7808  | 0.5131   |
| 0.2242        | 5.0   | 720  | 0.3015          | 0.6902 | 0.7942  | 0.5009   |
| 0.1752        | 6.0   | 864  | 0.3019          | 0.7017 | 0.8020  | 0.5166   |
| 0.1378        | 7.0   | 1008 | 0.2968          | 0.7029 | 0.8012  | 0.5323   |
| 0.1141        | 8.0   | 1152 | 0.3105          | 0.7012 | 0.8031  | 0.5218   |
| 0.0951        | 9.0   | 1296 | 0.3298          | 0.7112 | 0.8125  | 0.5201   |
| 0.085         | 10.0  | 1440 | 0.3288          | 0.6955 | 0.7956  | 0.5323   |
| 0.0693        | 11.0  | 1584 | 0.3553          | 0.7029 | 0.8108  | 0.5201   |
| 0.0579        | 12.0  | 1728 | 0.3494          | 0.7257 | 0.8196  | 0.5305   |
| 0.045         | 13.0  | 1872 | 0.3637          | 0.7165 | 0.8155  | 0.5288   |
| 0.0395        | 14.0  | 2016 | 0.3993          | 0.7177 | 0.8205  | 0.5201   |
| 0.0383        | 15.0  | 2160 | 0.4184          | 0.7099 | 0.8129  | 0.5113   |
| 0.0275        | 16.0  | 2304 | 0.4123          | 0.7151 | 0.8170  | 0.5288   |
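The exact evaluation code is not included in the card. Below is a minimal `compute_metrics` sketch that is consistent with the reported columns, assuming a multi-label setup with micro-averaged F1 and ROC AUC, exact-match (subset) accuracy, and a 0.5 decision threshold; the averaging mode and threshold are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def multi_label_metrics(logits, labels, threshold=0.5):
    """F1 (micro), ROC AUC (micro), and subset accuracy for multi-label outputs."""
    probs = 1.0 / (1.0 + np.exp(-logits))       # sigmoid over raw logits
    preds = (probs >= threshold).astype(int)     # per-label thresholding (assumed 0.5)
    return {
        "f1": f1_score(labels, preds, average="micro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return multi_label_metrics(logits, labels)
```

The reported values (best F1 of 0.7257 at epoch 12, the checkpoint described at the top of this card) correspond to the row with validation loss 0.3494.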
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0