
test_roberta-base-uncased_fine

This model is a fine-tuned version of FacebookAI/roberta-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6457
  • Accuracy: 0.9
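
The accuracy above is presumably the standard classification metric: the fraction of evaluation examples whose predicted label matches the reference label. A minimal sketch of that computation (function and example values are illustrative, not taken from this model's evaluation set):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# With 10 evaluation examples, an accuracy of 0.9 means 9 correct:
print(accuracy([1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
               [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]))  # -> 0.9
```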

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.4
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 200
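
With `lr_scheduler_type: linear` and no warmup steps specified, the learning rate presumably decays linearly from the initial 0.4 down to 0 over the course of training. A minimal sketch of that schedule (the total of 200 steps is an assumption, based on the one-optimizer-step-per-epoch pattern in the results table):

```python
def linear_lr(step, base_lr=0.4, total_steps=200):
    """Linearly decayed learning rate with no warmup -- a
    simplification of the Hugging Face `linear` schedule."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return base_lr * remaining

print(linear_lr(0))    # -> 0.4  (start of training)
print(linear_lr(100))  # -> 0.2  (halfway)
print(linear_lr(200))  # -> 0.0  (end of training)
```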

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|--------------:|------:|-----:|----------------:|---------:|
| 30.4061 | 2.0 | 2 | 273.0057 | 0.25 |
| 34.4778 | 4.0 | 4 | 105.9435 | 0.25 |
| 142.4009 | 6.0 | 6 | 59.4670 | 0.75 |
| 163.2062 | 8.0 | 8 | 7.1859 | 0.75 |
| 18.5088 | 10.0 | 10 | 213.3815 | 0.25 |
| 26.7609 | 12.0 | 12 | 139.8955 | 0.25 |
| 2.5101 | 14.0 | 14 | 40.3215 | 0.75 |
| 227.6862 | 16.0 | 16 | 65.1266 | 0.75 |
| 172.2084 | 18.0 | 18 | 18.4413 | 0.75 |
| 8.9021 | 20.0 | 20 | 135.4651 | 0.25 |
| 18.3707 | 22.0 | 22 | 106.7837 | 0.25 |
| 4.0304 | 24.0 | 24 | 25.2790 | 0.75 |
| 151.1751 | 26.0 | 26 | 40.5055 | 0.75 |
| 90.9441 | 28.0 | 28 | 5.5454 | 0.25 |
| 13.882 | 30.0 | 30 | 163.9310 | 0.25 |
| 21.8757 | 32.0 | 32 | 134.5106 | 0.25 |
| 7.3909 | 34.0 | 34 | 10.2510 | 0.75 |
| 89.2539 | 36.0 | 36 | 24.8699 | 0.75 |
| 41.6868 | 38.0 | 38 | 37.6232 | 0.25 |
| 7.2046 | 40.0 | 40 | 51.7227 | 0.25 |
| 0.785 | 42.0 | 42 | 12.0718 | 0.75 |
| 42.4561 | 44.0 | 44 | 3.2471 | 0.25 |
| 11.7306 | 46.0 | 46 | 96.5822 | 0.25 |
| 11.0079 | 48.0 | 48 | 39.1965 | 0.25 |
| 40.6509 | 50.0 | 50 | 20.8798 | 0.75 |
| 62.2298 | 52.0 | 52 | 3.0547 | 0.75 |
| 9.1953 | 54.0 | 54 | 93.0102 | 0.25 |
| 11.3341 | 56.0 | 56 | 49.3249 | 0.25 |
| 17.2652 | 58.0 | 58 | 12.8465 | 0.75 |
| 26.3375 | 60.0 | 60 | 17.7896 | 0.25 |
| 2.3781 | 62.0 | 62 | 1.1520 | 0.25 |
| 1.7964 | 64.0 | 64 | 27.0024 | 0.25 |
| 0.3859 | 66.0 | 66 | 10.6897 | 0.75 |
| 43.8049 | 68.0 | 68 | 4.2678 | 0.75 |
| 5.7809 | 70.0 | 70 | 57.0428 | 0.25 |
| 5.8372 | 72.0 | 72 | 3.1891 | 0.25 |
| 10.2265 | 74.0 | 74 | 14.5562 | 0.25 |
| 1.3022 | 76.0 | 76 | 0.9098 | 0.75 |
| 3.1084 | 78.0 | 78 | 11.1294 | 0.25 |
| 26.8542 | 80.0 | 80 | 8.2287 | 0.75 |
| 1.3357 | 82.0 | 82 | 50.3411 | 0.25 |
| 9.7622 | 84.0 | 84 | 56.3544 | 0.25 |
| 2.3044 | 86.0 | 86 | 11.6592 | 0.75 |
| 65.6323 | 88.0 | 88 | 17.1945 | 0.75 |
| 26.2342 | 90.0 | 90 | 32.7401 | 0.25 |
| 6.1743 | 92.0 | 92 | 50.0640 | 0.25 |
| 3.0945 | 94.0 | 94 | 8.6429 | 0.75 |
| 55.6696 | 96.0 | 96 | 11.5617 | 0.75 |
| 7.9304 | 98.0 | 98 | 48.5339 | 0.25 |
| 9.4683 | 100.0 | 100 | 68.6256 | 0.25 |
| 6.9711 | 102.0 | 102 | 1.1185 | 0.25 |
| 17.9561 | 104.0 | 104 | 2.0326 | 0.75 |
| 3.8259 | 106.0 | 106 | 28.7270 | 0.25 |
| 1.4343 | 108.0 | 108 | 7.2848 | 0.75 |
| 39.3397 | 110.0 | 110 | 5.9552 | 0.75 |
| 2.2064 | 112.0 | 112 | 23.7964 | 0.25 |
| 1.8151 | 114.0 | 114 | 5.3844 | 0.75 |
| 22.2713 | 116.0 | 116 | 2.2849 | 0.75 |
| 3.3845 | 118.0 | 118 | 35.3856 | 0.25 |
| 3.4312 | 120.0 | 120 | 0.6936 | 0.25 |
| 6.1348 | 122.0 | 122 | 9.6259 | 0.25 |
| 0.4537 | 124.0 | 124 | 5.0600 | 0.75 |
| 19.9785 | 126.0 | 126 | 1.4862 | 0.25 |
| 3.5936 | 128.0 | 128 | 44.3517 | 0.25 |
| 5.9722 | 130.0 | 130 | 18.2233 | 0.25 |
| 20.099 | 132.0 | 132 | 9.5809 | 0.75 |
| 28.7009 | 134.0 | 134 | 2.0241 | 0.75 |
| 3.8411 | 136.0 | 136 | 39.1799 | 0.25 |
| 5.2586 | 138.0 | 138 | 21.3355 | 0.25 |
| 8.9217 | 140.0 | 140 | 5.3869 | 0.75 |
| 14.4647 | 142.0 | 142 | 6.8343 | 0.25 |
| 0.6143 | 144.0 | 144 | 2.6079 | 0.25 |
| 0.3063 | 146.0 | 146 | 0.6002 | 0.75 |
| 0.0396 | 148.0 | 148 | 20.4224 | 0.25 |
| 3.2237 | 150.0 | 150 | 14.8742 | 0.25 |
| 6.1285 | 152.0 | 152 | 2.8047 | 0.75 |
| 6.9479 | 154.0 | 154 | 15.0406 | 0.25 |
| 2.3471 | 156.0 | 156 | 13.5490 | 0.25 |
| 3.6263 | 158.0 | 158 | 1.4868 | 0.75 |
| 0.5022 | 160.0 | 160 | 17.8967 | 0.25 |
| 4.1843 | 162.0 | 162 | 18.0276 | 0.25 |
| 0.7145 | 164.0 | 164 | 4.3462 | 0.75 |
| 23.1296 | 166.0 | 166 | 6.6729 | 0.75 |
| 13.1642 | 168.0 | 168 | 3.9175 | 0.25 |
| 3.433 | 170.0 | 170 | 33.4045 | 0.25 |
| 5.0543 | 172.0 | 172 | 31.1038 | 0.25 |
| 2.695 | 174.0 | 174 | 6.2978 | 0.25 |
| 11.396 | 176.0 | 176 | 5.3975 | 0.75 |
| 20.3118 | 178.0 | 178 | 4.1132 | 0.75 |
| 4.1789 | 180.0 | 180 | 9.2370 | 0.25 |
| 2.806 | 182.0 | 182 | 16.7589 | 0.25 |
| 1.8424 | 184.0 | 184 | 7.5781 | 0.25 |
| 3.0288 | 186.0 | 186 | 1.7304 | 0.75 |
| 5.4305 | 188.0 | 188 | 0.8391 | 0.75 |
| 1.9329 | 190.0 | 190 | 9.1368 | 0.25 |
| 1.8576 | 192.0 | 192 | 12.6123 | 0.25 |
| 2.0932 | 194.0 | 194 | 8.6446 | 0.25 |
| 0.5404 | 196.0 | 196 | 1.8219 | 0.25 |
| 2.7355 | 198.0 | 198 | 0.5940 | 0.75 |
| 2.1872 | 200.0 | 200 | 0.7259 | 0.25 |
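
The validation loss above oscillates heavily rather than decreasing steadily, which may be related to the unusually large learning rate of 0.4: once bias correction kicks in, an Adam update moves each parameter by roughly the learning rate whenever its gradient is non-trivial. A minimal sketch of a single Adam step with the stated optimizer settings (a simplified scalar version, not the actual training code):

```python
import math

def adam_step(param, grad, m, v, t, lr=0.4,
              beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update with the card's optimizer hyperparameters."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# First step (t=1) with gradient 1.0: the parameter moves by ~lr.
p, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
print(round(p, 6))  # -> -0.4
```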

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1
Model details

  • Format: Safetensors
  • Model size: 125M params
  • Tensor type: F32

Model tree for jhonalevc1995/test_roberta-base-uncased_fine

  • Finetuned from: FacebookAI/roberta-base