tiny_bert_30_intents

This model is a fine-tuned version of prajjwal1/bert-tiny on an unspecified dataset (30 intent classes, per the model name). It achieves the following results on the evaluation set:

  • Loss: 0.2835
  • Accuracy: 0.9245

Model description

More information needed

Intended uses & limitations

More information needed
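Although official usage guidance is not provided, a minimal inference sketch with the standard transformers text-classification pipeline would look like the following. The repo id is taken from the model tree at the bottom of this card; the wrapper function is a hypothetical example, not documented API of this model.

```python
from typing import Dict, List

def classify_intent(text: str,
                    model_id: str = "m-aliabbas1/tiny_bert_30_intents") -> List[Dict]:
    """Run the fine-tuned BERT-tiny intent classifier on one utterance.

    transformers is imported lazily so the sketch stays importable even
    without the library installed; the checkpoint is downloaded from the
    Hugging Face Hub on first use.
    """
    from transformers import pipeline  # standard API; model id assumed live
    clf = pipeline("text-classification", model=model_id)
    return clf(text)  # e.g. [{"label": "<intent>", "score": ...}]

# Usage (requires network access and the transformers library):
# classify_intent("I want to cancel my subscription")
```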

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
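The learning_rate and lr_scheduler_type entries above imply a linearly decaying schedule. A minimal sketch of that decay in plain Python, assuming zero warmup steps and the 16,750 total optimizer steps shown in the training log (the helper function is illustrative, not from the training script):

```python
# Sketch of the linear LR schedule used by the Hugging Face Trainer when
# lr_scheduler_type="linear" and warmup is zero: the learning rate decays
# linearly from the initial value to 0 over the total training steps.
INITIAL_LR = 2e-05    # learning_rate from the hyperparameters above
TOTAL_STEPS = 16750   # 50 epochs * 335 steps/epoch (from the training log)

def linear_lr(step: int, initial_lr: float = INITIAL_LR,
              total_steps: int = TOTAL_STEPS) -> float:
    """Learning rate at a given optimizer step (no warmup assumed)."""
    remaining = max(0, total_steps - step)
    return initial_lr * remaining / total_steps

# At step 0 the LR is the configured 2e-05; halfway through training it is
# 1e-05; by the final step it has decayed to 0.
```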

Training results

Training Loss | Epoch | Step | Validation Loss | Accuracy
No log | 1.0 | 335 | 3.1862 | 0.2299
3.264 | 2.0 | 670 | 2.9366 | 0.3758
2.9242 | 3.0 | 1005 | 2.6777 | 0.4648
2.9242 | 4.0 | 1340 | 2.4290 | 0.5487
2.5619 | 5.0 | 1675 | 2.1939 | 0.6174
2.2347 | 6.0 | 2010 | 1.9840 | 0.6493
2.2347 | 7.0 | 2345 | 1.7925 | 0.6862
1.9296 | 8.0 | 2680 | 1.6204 | 0.7215
1.6809 | 9.0 | 3015 | 1.4657 | 0.7550
1.6809 | 10.0 | 3350 | 1.3267 | 0.7785
1.4613 | 11.0 | 3685 | 1.2082 | 0.7987
1.2746 | 12.0 | 4020 | 1.0965 | 0.8121
1.2746 | 13.0 | 4355 | 1.0022 | 0.8473
1.1137 | 14.0 | 4690 | 0.9185 | 0.8607
0.991 | 15.0 | 5025 | 0.8464 | 0.8742
0.991 | 16.0 | 5360 | 0.7851 | 0.8792
0.8792 | 17.0 | 5695 | 0.7309 | 0.8792
0.7797 | 18.0 | 6030 | 0.6827 | 0.8842
0.7797 | 19.0 | 6365 | 0.6391 | 0.8876
0.7006 | 20.0 | 6700 | 0.6014 | 0.8943
0.6422 | 21.0 | 7035 | 0.5713 | 0.8993
0.6422 | 22.0 | 7370 | 0.5384 | 0.9044
0.5833 | 23.0 | 7705 | 0.5112 | 0.9044
0.5441 | 24.0 | 8040 | 0.4841 | 0.9077
0.5441 | 25.0 | 8375 | 0.4627 | 0.9094
0.4996 | 26.0 | 8710 | 0.4455 | 0.9128
0.4594 | 27.0 | 9045 | 0.4259 | 0.9128
0.4594 | 28.0 | 9380 | 0.4103 | 0.9161
0.4288 | 29.0 | 9715 | 0.3959 | 0.9161
0.41 | 30.0 | 10050 | 0.3821 | 0.9161
0.41 | 31.0 | 10385 | 0.3687 | 0.9161
0.383 | 32.0 | 10720 | 0.3595 | 0.9144
0.3618 | 33.0 | 11055 | 0.3473 | 0.9144
0.3618 | 34.0 | 11390 | 0.3412 | 0.9161
0.3534 | 35.0 | 11725 | 0.3336 | 0.9178
0.3277 | 36.0 | 12060 | 0.3275 | 0.9178
0.3277 | 37.0 | 12395 | 0.3208 | 0.9195
0.3164 | 38.0 | 12730 | 0.3143 | 0.9211
0.3107 | 39.0 | 13065 | 0.3062 | 0.9195
0.3107 | 40.0 | 13400 | 0.3031 | 0.9195
0.296 | 41.0 | 13735 | 0.2995 | 0.9211
0.2998 | 42.0 | 14070 | 0.2962 | 0.9211
0.2998 | 43.0 | 14405 | 0.2936 | 0.9211
0.2813 | 44.0 | 14740 | 0.2893 | 0.9228
0.2767 | 45.0 | 15075 | 0.2874 | 0.9195
0.2767 | 46.0 | 15410 | 0.2865 | 0.9228
0.2813 | 47.0 | 15745 | 0.2849 | 0.9245
0.272 | 48.0 | 16080 | 0.2842 | 0.9245
0.272 | 49.0 | 16415 | 0.2837 | 0.9245
0.2751 | 50.0 | 16750 | 0.2835 | 0.9245
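The log above also pins down the approximate size of the otherwise undocumented training set: 16,750 total steps over 50 epochs is 335 optimizer steps per epoch, which at train_batch_size=16 corresponds to at most about 5,360 training examples (the final batch of each epoch may be partial). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope estimate derived from the training log above.
total_steps = 16750   # step count at epoch 50
num_epochs = 50
train_batch_size = 16

steps_per_epoch = total_steps // num_epochs           # matches the 335 at epoch 1
max_train_examples = steps_per_epoch * train_batch_size

print(steps_per_epoch, max_train_examples)
# So the training set holds at most 5,360 examples, and more than
# 334 * 16 = 5,344 if the last batch of each epoch is partial.
```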

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3

Model tree for m-aliabbas1/tiny_bert_30_intents

  • Finetuned from prajjwal1/bert-tiny