tiny_bert_31_erc_intents

This model is a fine-tuned version of prajjwal1/bert-tiny on an unspecified dataset (listed as "None" in the auto-generated card metadata). It achieves the following results on the evaluation set:

  • Loss: 0.3615
  • Accuracy: 0.9485
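The checkpoint can be loaded like any Hugging Face text-classification model. A minimal usage sketch with the `transformers` pipeline (the repo id is the one this card belongs to; the example utterance is made up, and the returned label names depend on the label map stored in the uploaded config, which is not documented here):

```python
# Sketch: intent classification with the Hugging Face pipeline API.
# Assumes `transformers` is installed and the model can be downloaded.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="m-aliabbas1/tiny_bert_31_erc_intents",
)

# Hypothetical input utterance; the 31 ERC intent labels are not listed
# in this card, so inspect the output to see the actual label names.
result = classifier("I would like to schedule an appointment.")
print(result)
```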

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
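With `lr_scheduler_type: linear` and no warmup listed, the learning rate decays from 2e-05 toward 0 over the 13,650 total optimizer steps (50 epochs × 273 steps per epoch, per the training table below). A minimal sketch of that decay, assuming the standard linear-to-zero rule:

```python
BASE_LR = 2e-05      # learning_rate from the hyperparameters above
TOTAL_STEPS = 13650  # 50 epochs x 273 steps/epoch (see training results)

def linear_lr(step: int, total_steps: int = TOTAL_STEPS,
              base_lr: float = BASE_LR) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))      # base rate at the start of training
print(linear_lr(6825))   # halfway through (end of epoch 25)
print(linear_lr(13650))  # fully decayed at the final step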

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|---------------|-------|-------|-----------------|----------|
| No log        | 1.0   | 273   | 3.2312          | 0.2247   |
| 3.2677        | 2.0   | 546   | 3.0029          | 0.4454   |
| 3.2677        | 3.0   | 819   | 2.7765          | 0.5196   |
| 2.8826        | 4.0   | 1092  | 2.5522          | 0.6000   |
| 2.8826        | 5.0   | 1365  | 2.3412          | 0.6371   |
| 2.4879        | 6.0   | 1638  | 2.1518          | 0.6577   |
| 2.4879        | 7.0   | 1911  | 1.9724          | 0.6742   |
| 2.137         | 8.0   | 2184  | 1.8147          | 0.6928   |
| 2.137         | 9.0   | 2457  | 1.6748          | 0.7216   |
| 1.8286        | 10.0  | 2730  | 1.5458          | 0.7608   |
| 1.592         | 11.0  | 3003  | 1.4381          | 0.7876   |
| 1.592         | 12.0  | 3276  | 1.3335          | 0.8000   |
| 1.3969        | 13.0  | 3549  | 1.2423          | 0.8124   |
| 1.3969        | 14.0  | 3822  | 1.1551          | 0.8330   |
| 1.2209        | 15.0  | 4095  | 1.0825          | 0.8474   |
| 1.2209        | 16.0  | 4368  | 1.0141          | 0.8536   |
| 1.0896        | 17.0  | 4641  | 0.9527          | 0.8763   |
| 1.0896        | 18.0  | 4914  | 0.8963          | 0.8825   |
| 0.9763        | 19.0  | 5187  | 0.8472          | 0.8948   |
| 0.9763        | 20.0  | 5460  | 0.7968          | 0.9052   |
| 0.8742        | 21.0  | 5733  | 0.7577          | 0.9072   |
| 0.784         | 22.0  | 6006  | 0.7193          | 0.9113   |
| 0.784         | 23.0  | 6279  | 0.6834          | 0.9113   |
| 0.7159        | 24.0  | 6552  | 0.6500          | 0.9196   |
| 0.7159        | 25.0  | 6825  | 0.6224          | 0.9196   |
| 0.6496        | 26.0  | 7098  | 0.5931          | 0.9237   |
| 0.6496        | 27.0  | 7371  | 0.5679          | 0.9278   |
| 0.5962        | 28.0  | 7644  | 0.5459          | 0.9258   |
| 0.5962        | 29.0  | 7917  | 0.5243          | 0.9320   |
| 0.5553        | 30.0  | 8190  | 0.5065          | 0.9361   |
| 0.5553        | 31.0  | 8463  | 0.4888          | 0.9320   |
| 0.517         | 32.0  | 8736  | 0.4732          | 0.9340   |
| 0.4827        | 33.0  | 9009  | 0.4607          | 0.9361   |
| 0.4827        | 34.0  | 9282  | 0.4477          | 0.9381   |
| 0.4479        | 35.0  | 9555  | 0.4346          | 0.9423   |
| 0.4479        | 36.0  | 9828  | 0.4240          | 0.9423   |
| 0.4231        | 37.0  | 10101 | 0.4153          | 0.9485   |
| 0.4231        | 38.0  | 10374 | 0.4065          | 0.9464   |
| 0.4157        | 39.0  | 10647 | 0.3990          | 0.9464   |
| 0.4157        | 40.0  | 10920 | 0.3915          | 0.9464   |
| 0.3914        | 41.0  | 11193 | 0.3866          | 0.9464   |
| 0.3914        | 42.0  | 11466 | 0.3810          | 0.9443   |
| 0.3747        | 43.0  | 11739 | 0.3763          | 0.9464   |
| 0.3684        | 44.0  | 12012 | 0.3727          | 0.9505   |
| 0.3684        | 45.0  | 12285 | 0.3690          | 0.9505   |
| 0.3605        | 46.0  | 12558 | 0.3662          | 0.9505   |
| 0.3605        | 47.0  | 12831 | 0.3643          | 0.9485   |
| 0.3499        | 48.0  | 13104 | 0.3628          | 0.9485   |
| 0.3499        | 49.0  | 13377 | 0.3618          | 0.9485   |
| 0.3514        | 50.0  | 13650 | 0.3615          | 0.9485   |
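Although the dataset is not documented, its approximate size can be recovered from the table: one epoch is 273 optimizer steps at train_batch_size 16, so the training split holds at most 273 × 16 = 4,368 examples, and at least 272 × 16 + 1 = 4,353 (assuming the last batch may be partial). A quick check of that arithmetic:

```python
STEPS_PER_EPOCH = 273   # from the training-results table (epoch 1 ends at step 273)
TRAIN_BATCH_SIZE = 16   # from the hyperparameters

upper = STEPS_PER_EPOCH * TRAIN_BATCH_SIZE             # every batch full
lower = (STEPS_PER_EPOCH - 1) * TRAIN_BATCH_SIZE + 1   # last batch holds 1 example
print(f"training split size is between {lower} and {upper} examples")
```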

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3
