# tabert-1k-naamapadam
This model is a fine-tuned version of livinNector/tabert-1k on the naamapadam dataset. It achieves the following results on the evaluation set:
- Loss: 0.2825
- Precision: 0.7764
- Recall: 0.8055
- F1: 0.7907
- Accuracy: 0.9068
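The card ships without a usage snippet. As a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as `livinNector/tabert-1k-naamapadam` (the repo id is inferred from the model name and base-model owner, not stated in the card) and carries a token-classification head for NER:

```python
from transformers import pipeline

# Repo id is an assumption based on the model name; adjust to the actual Hub location.
ner = pipeline(
    "token-classification",
    model="livinNector/tabert-1k-naamapadam",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)

# naamapadam is an Indic-language NER corpus; a Tamil sentence as an example input.
print(ner("சென்னை தமிழ்நாட்டின் தலைநகரம் ஆகும்."))
```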
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
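These values map one-to-one onto the stock `transformers.TrainingArguments` fields; a hedged sketch of that mapping (the original training script is not part of the card, so treat this as a reconstruction rather than the actual code):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tabert-1k-naamapadam",  # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,            # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```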
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4618        | 0.05  | 400   | 0.3963          | 0.7329    | 0.6498 | 0.6889 | 0.8716   |
| 0.3869        | 0.1   | 800   | 0.3583          | 0.7145    | 0.7347 | 0.7244 | 0.8828   |
| 0.3642        | 0.15  | 1200  | 0.3511          | 0.7241    | 0.7412 | 0.7325 | 0.8842   |
| 0.3533        | 0.21  | 1600  | 0.3451          | 0.7393    | 0.7429 | 0.7411 | 0.8873   |
| 0.3501        | 0.26  | 2000  | 0.3367          | 0.7456    | 0.7562 | 0.7509 | 0.8899   |
| 0.3369        | 0.31  | 2400  | 0.3343          | 0.7476    | 0.7549 | 0.7512 | 0.8909   |
| 0.3302        | 0.36  | 2800  | 0.3282          | 0.7413    | 0.7584 | 0.7497 | 0.8926   |
| 0.3327        | 0.41  | 3200  | 0.3238          | 0.7584    | 0.7717 | 0.7650 | 0.8961   |
| 0.3248        | 0.46  | 3600  | 0.3209          | 0.7468    | 0.7795 | 0.7628 | 0.8956   |
| 0.3175        | 0.51  | 4000  | 0.3140          | 0.7659    | 0.7681 | 0.7670 | 0.8985   |
| 0.3132        | 0.57  | 4400  | 0.3111          | 0.7537    | 0.7795 | 0.7664 | 0.8970   |
| 0.3141        | 0.62  | 4800  | 0.3122          | 0.7529    | 0.7797 | 0.7661 | 0.8972   |
| 0.3077        | 0.67  | 5200  | 0.3138          | 0.7493    | 0.7844 | 0.7665 | 0.8974   |
| 0.309         | 0.72  | 5600  | 0.3099          | 0.7674    | 0.7729 | 0.7702 | 0.8992   |
| 0.3085        | 0.77  | 6000  | 0.3038          | 0.7626    | 0.7940 | 0.7780 | 0.9009   |
| 0.3031        | 0.82  | 6400  | 0.3055          | 0.7633    | 0.7834 | 0.7732 | 0.8992   |
| 0.2958        | 0.87  | 6800  | 0.3054          | 0.7621    | 0.7924 | 0.7770 | 0.8991   |
| 0.2953        | 0.93  | 7200  | 0.3076          | 0.7714    | 0.7834 | 0.7774 | 0.9005   |
| 0.2978        | 0.98  | 7600  | 0.3003          | 0.7729    | 0.7855 | 0.7792 | 0.9017   |
| 0.2826        | 1.03  | 8000  | 0.3016          | 0.7665    | 0.7905 | 0.7783 | 0.9012   |
| 0.2757        | 1.08  | 8400  | 0.3053          | 0.7520    | 0.8072 | 0.7786 | 0.8996   |
| 0.2751        | 1.13  | 8800  | 0.3026          | 0.7626    | 0.7982 | 0.7800 | 0.9008   |
| 0.2694        | 1.18  | 9200  | 0.2957          | 0.7682    | 0.8007 | 0.7841 | 0.9039   |
| 0.2723        | 1.23  | 9600  | 0.2944          | 0.7698    | 0.8005 | 0.7849 | 0.9039   |
| 0.2726        | 1.29  | 10000 | 0.2912          | 0.7774    | 0.7930 | 0.7851 | 0.9042   |
| 0.2674        | 1.34  | 10400 | 0.2912          | 0.7739    | 0.7973 | 0.7854 | 0.9043   |
| 0.2714        | 1.39  | 10800 | 0.2907          | 0.7729    | 0.7995 | 0.7860 | 0.9036   |
| 0.2625        | 1.44  | 11200 | 0.2949          | 0.7716    | 0.7965 | 0.7838 | 0.9041   |
| 0.2669        | 1.49  | 11600 | 0.2883          | 0.7701    | 0.8087 | 0.7889 | 0.9054   |
| 0.2601        | 1.54  | 12000 | 0.2868          | 0.7759    | 0.8069 | 0.7911 | 0.9066   |
| 0.2633        | 1.59  | 12400 | 0.2895          | 0.7659    | 0.8125 | 0.7885 | 0.9051   |
| 0.2641        | 1.65  | 12800 | 0.2878          | 0.7790    | 0.7972 | 0.7880 | 0.9059   |
| 0.2661        | 1.7   | 13200 | 0.2875          | 0.7800    | 0.7999 | 0.7898 | 0.9068   |
| 0.2719        | 1.75  | 13600 | 0.2853          | 0.7783    | 0.8025 | 0.7902 | 0.9070   |
| 0.2602        | 1.8   | 14000 | 0.2827          | 0.7801    | 0.8051 | 0.7924 | 0.9070   |
| 0.2688        | 1.85  | 14400 | 0.2819          | 0.7742    | 0.8061 | 0.7898 | 0.9066   |
| 0.2615        | 1.9   | 14800 | 0.2828          | 0.7764    | 0.8017 | 0.7888 | 0.9065   |
| 0.2623        | 1.95  | 15200 | 0.2825          | 0.7764    | 0.8055 | 0.7907 | 0.9068   |
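Precision, recall, and F1 in the table are entity-level scores of the kind `seqeval` computes in standard token-classification fine-tuning; a minimal sketch of that computation (the tag sequences are toy IOB2 examples, not drawn from the evaluation set, and the card does not confirm `seqeval` was the tool used):

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold and predicted tag sequences in IOB2 format, for illustration only.
y_true = [["B-LOC", "I-LOC", "O", "B-PER", "O"]]
y_pred = [["B-LOC", "I-LOC", "O", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))  # matched spans / predicted spans
print("recall:   ", recall_score(y_true, y_pred))     # matched spans / gold spans
print("f1:       ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))   # token-level accuracy
```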
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3