roberta-large-sst-2-64-13

This model is a fine-tuned version of roberta-large on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7488
  • Accuracy: 0.9141
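
No usage example is included in the card. As a minimal inference sketch (assuming the checkpoint is published as simonycl/roberta-large-sst-2-64-13 and uses the standard sequence-classification head; the label mapping is not documented here), the model can be loaded with the transformers pipeline:

```python
from transformers import pipeline

# Repo id taken from this card's listing; adjust if the checkpoint lives elsewhere.
classifier = pipeline(
    "text-classification",
    model="simonycl/roberta-large-sst-2-64-13",
)

# The card does not document the label mapping, so predictions may surface as
# LABEL_0 / LABEL_1 rather than negative / positive.
print(classifier("A thoroughly enjoyable film with a sharp script."))
```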

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 150
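
The training script itself is not part of this card. As a rough sketch, the values above would typically be expressed with transformers.TrainingArguments as follows; the output directory and the per-epoch evaluation/logging strategies are assumptions, not documented here:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above onto TrainingArguments.
# output_dir is a placeholder; evaluation/logging strategies are assumptions.
training_args = TrainingArguments(
    output_dir="roberta-large-sst-2-64-13",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=150,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
)
```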

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.7118 | 0.5 |
| No log | 2.0 | 8 | 0.7101 | 0.5 |
| 0.7289 | 3.0 | 12 | 0.7072 | 0.5 |
| 0.7289 | 4.0 | 16 | 0.7042 | 0.5 |
| 0.6989 | 5.0 | 20 | 0.6999 | 0.5 |
| 0.6989 | 6.0 | 24 | 0.6966 | 0.5 |
| 0.6989 | 7.0 | 28 | 0.6938 | 0.5 |
| 0.6959 | 8.0 | 32 | 0.6938 | 0.5 |
| 0.6959 | 9.0 | 36 | 0.6990 | 0.4766 |
| 0.6977 | 10.0 | 40 | 0.6931 | 0.5 |
| 0.6977 | 11.0 | 44 | 0.6854 | 0.5156 |
| 0.6977 | 12.0 | 48 | 0.6882 | 0.6016 |
| 0.6514 | 13.0 | 52 | 0.6495 | 0.7578 |
| 0.6514 | 14.0 | 56 | 0.5930 | 0.7656 |
| 0.5232 | 15.0 | 60 | 0.5280 | 0.8203 |
| 0.5232 | 16.0 | 64 | 0.4286 | 0.875 |
| 0.5232 | 17.0 | 68 | 0.2916 | 0.8906 |
| 0.2793 | 18.0 | 72 | 0.3444 | 0.9141 |
| 0.2793 | 19.0 | 76 | 0.4673 | 0.8984 |
| 0.0537 | 20.0 | 80 | 0.4232 | 0.9062 |
| 0.0537 | 21.0 | 84 | 0.4351 | 0.9297 |
| 0.0537 | 22.0 | 88 | 0.5124 | 0.9297 |
| 0.0032 | 23.0 | 92 | 0.4585 | 0.9375 |
| 0.0032 | 24.0 | 96 | 0.5067 | 0.9219 |
| 0.0016 | 25.0 | 100 | 0.5244 | 0.9375 |
| 0.0016 | 26.0 | 104 | 0.7050 | 0.9141 |
| 0.0016 | 27.0 | 108 | 0.5847 | 0.9297 |
| 0.0004 | 28.0 | 112 | 0.5744 | 0.9297 |
| 0.0004 | 29.0 | 116 | 0.5828 | 0.9375 |
| 0.0001 | 30.0 | 120 | 0.5884 | 0.9375 |
| 0.0001 | 31.0 | 124 | 0.5931 | 0.9375 |
| 0.0001 | 32.0 | 128 | 0.5983 | 0.9375 |
| 0.0001 | 33.0 | 132 | 0.6038 | 0.9375 |
| 0.0001 | 34.0 | 136 | 0.6076 | 0.9375 |
| 0.0001 | 35.0 | 140 | 0.6083 | 0.9375 |
| 0.0001 | 36.0 | 144 | 0.7169 | 0.9219 |
| 0.0001 | 37.0 | 148 | 0.6166 | 0.9375 |
| 0.0336 | 38.0 | 152 | 0.8108 | 0.9141 |
| 0.0336 | 39.0 | 156 | 0.7454 | 0.9141 |
| 0.0348 | 40.0 | 160 | 0.6944 | 0.9141 |
| 0.0348 | 41.0 | 164 | 0.7467 | 0.9141 |
| 0.0348 | 42.0 | 168 | 0.6764 | 0.9141 |
| 0.0402 | 43.0 | 172 | 0.6839 | 0.9219 |
| 0.0402 | 44.0 | 176 | 0.7118 | 0.9219 |
| 0.0002 | 45.0 | 180 | 0.6943 | 0.9219 |
| 0.0002 | 46.0 | 184 | 0.7469 | 0.9141 |
| 0.0002 | 47.0 | 188 | 0.7264 | 0.9219 |
| 0.0001 | 48.0 | 192 | 0.7112 | 0.9219 |
| 0.0001 | 49.0 | 196 | 0.6948 | 0.9219 |
| 0.0001 | 50.0 | 200 | 0.8408 | 0.9062 |
| 0.0001 | 51.0 | 204 | 0.7876 | 0.9141 |
| 0.0001 | 52.0 | 208 | 0.7271 | 0.9219 |
| 0.0001 | 53.0 | 212 | 0.8016 | 0.9141 |
| 0.0001 | 54.0 | 216 | 0.8336 | 0.9062 |
| 0.0148 | 55.0 | 220 | 0.7701 | 0.9219 |
| 0.0148 | 56.0 | 224 | 0.8717 | 0.9062 |
| 0.0148 | 57.0 | 228 | 0.8018 | 0.9141 |
| 0.0001 | 58.0 | 232 | 0.8777 | 0.9062 |
| 0.0001 | 59.0 | 236 | 0.9158 | 0.9062 |
| 0.0001 | 60.0 | 240 | 0.9356 | 0.8984 |
| 0.0001 | 61.0 | 244 | 0.7494 | 0.9062 |
| 0.0001 | 62.0 | 248 | 0.6708 | 0.9219 |
| 0.0298 | 63.0 | 252 | 0.6649 | 0.9141 |
| 0.0298 | 64.0 | 256 | 0.7463 | 0.9062 |
| 0.0285 | 65.0 | 260 | 0.8065 | 0.8984 |
| 0.0285 | 66.0 | 264 | 0.8267 | 0.9062 |
| 0.0285 | 67.0 | 268 | 0.8447 | 0.8984 |
| 0.0001 | 68.0 | 272 | 0.8409 | 0.8984 |
| 0.0001 | 69.0 | 276 | 0.6652 | 0.9219 |
| 0.0005 | 70.0 | 280 | 0.6507 | 0.9219 |
| 0.0005 | 71.0 | 284 | 0.6889 | 0.9062 |
| 0.0005 | 72.0 | 288 | 0.6652 | 0.9062 |
| 0.0296 | 73.0 | 292 | 0.6454 | 0.9062 |
| 0.0296 | 74.0 | 296 | 0.6368 | 0.9062 |
| 0.0002 | 75.0 | 300 | 0.6396 | 0.9062 |
| 0.0002 | 76.0 | 304 | 0.6505 | 0.9062 |
| 0.0002 | 77.0 | 308 | 0.6620 | 0.9062 |
| 0.0002 | 78.0 | 312 | 0.6734 | 0.9062 |
| 0.0002 | 79.0 | 316 | 0.6846 | 0.9062 |
| 0.0002 | 80.0 | 320 | 0.6951 | 0.9062 |
| 0.0002 | 81.0 | 324 | 0.7038 | 0.9062 |
| 0.0002 | 82.0 | 328 | 0.7116 | 0.9062 |
| 0.0002 | 83.0 | 332 | 0.7187 | 0.9062 |
| 0.0002 | 84.0 | 336 | 0.7250 | 0.9062 |
| 0.0002 | 85.0 | 340 | 0.6930 | 0.9141 |
| 0.0002 | 86.0 | 344 | 0.6856 | 0.9219 |
| 0.0002 | 87.0 | 348 | 0.7474 | 0.9141 |
| 0.0227 | 88.0 | 352 | 0.6506 | 0.9219 |
| 0.0227 | 89.0 | 356 | 0.6457 | 0.9219 |
| 0.0001 | 90.0 | 360 | 0.7022 | 0.9141 |
| 0.0001 | 91.0 | 364 | 0.7275 | 0.9062 |
| 0.0001 | 92.0 | 368 | 0.7375 | 0.9141 |
| 0.0001 | 93.0 | 372 | 0.8008 | 0.9062 |
| 0.0001 | 94.0 | 376 | 0.6855 | 0.9141 |
| 0.0053 | 95.0 | 380 | 0.5869 | 0.9375 |
| 0.0053 | 96.0 | 384 | 0.6060 | 0.9297 |
| 0.0053 | 97.0 | 388 | 0.5990 | 0.9297 |
| 0.0001 | 98.0 | 392 | 0.6250 | 0.9141 |
| 0.0001 | 99.0 | 396 | 0.6505 | 0.9141 |
| 0.0001 | 100.0 | 400 | 0.6577 | 0.9141 |
| 0.0001 | 101.0 | 404 | 0.6594 | 0.9141 |
| 0.0001 | 102.0 | 408 | 0.6602 | 0.9141 |
| 0.0001 | 103.0 | 412 | 0.6610 | 0.9219 |
| 0.0001 | 104.0 | 416 | 0.6622 | 0.9141 |
| 0.037 | 105.0 | 420 | 0.6055 | 0.9297 |
| 0.037 | 106.0 | 424 | 0.5915 | 0.9297 |
| 0.037 | 107.0 | 428 | 0.6261 | 0.9297 |
| 0.0001 | 108.0 | 432 | 0.6679 | 0.9219 |
| 0.0001 | 109.0 | 436 | 0.7106 | 0.9219 |
| 0.0001 | 110.0 | 440 | 0.7223 | 0.9219 |
| 0.0001 | 111.0 | 444 | 0.7267 | 0.9141 |
| 0.0001 | 112.0 | 448 | 0.7287 | 0.9141 |
| 0.0001 | 113.0 | 452 | 0.7298 | 0.9141 |
| 0.0001 | 114.0 | 456 | 0.7306 | 0.9141 |
| 0.0001 | 115.0 | 460 | 0.7314 | 0.9141 |
| 0.0001 | 116.0 | 464 | 0.7323 | 0.9141 |
| 0.0001 | 117.0 | 468 | 0.7333 | 0.9141 |
| 0.0001 | 118.0 | 472 | 0.7342 | 0.9141 |
| 0.0001 | 119.0 | 476 | 0.7351 | 0.9141 |
| 0.0001 | 120.0 | 480 | 0.7359 | 0.9141 |
| 0.0001 | 121.0 | 484 | 0.7369 | 0.9141 |
| 0.0001 | 122.0 | 488 | 0.7379 | 0.9141 |
| 0.0001 | 123.0 | 492 | 0.7388 | 0.9141 |
| 0.0001 | 124.0 | 496 | 0.7396 | 0.9141 |
| 0.0001 | 125.0 | 500 | 0.7403 | 0.9141 |
| 0.0001 | 126.0 | 504 | 0.7410 | 0.9141 |
| 0.0001 | 127.0 | 508 | 0.7417 | 0.9141 |
| 0.0001 | 128.0 | 512 | 0.7423 | 0.9141 |
| 0.0001 | 129.0 | 516 | 0.7429 | 0.9141 |
| 0.0001 | 130.0 | 520 | 0.7435 | 0.9141 |
| 0.0001 | 131.0 | 524 | 0.7440 | 0.9141 |
| 0.0001 | 132.0 | 528 | 0.7446 | 0.9141 |
| 0.0001 | 133.0 | 532 | 0.7450 | 0.9141 |
| 0.0001 | 134.0 | 536 | 0.7455 | 0.9141 |
| 0.0001 | 135.0 | 540 | 0.7459 | 0.9141 |
| 0.0001 | 136.0 | 544 | 0.7463 | 0.9141 |
| 0.0001 | 137.0 | 548 | 0.7466 | 0.9141 |
| 0.0001 | 138.0 | 552 | 0.7470 | 0.9141 |
| 0.0001 | 139.0 | 556 | 0.7473 | 0.9141 |
| 0.0001 | 140.0 | 560 | 0.7475 | 0.9141 |
| 0.0001 | 141.0 | 564 | 0.7478 | 0.9141 |
| 0.0001 | 142.0 | 568 | 0.7480 | 0.9141 |
| 0.0001 | 143.0 | 572 | 0.7482 | 0.9141 |
| 0.0001 | 144.0 | 576 | 0.7483 | 0.9141 |
| 0.0001 | 145.0 | 580 | 0.7485 | 0.9141 |
| 0.0001 | 146.0 | 584 | 0.7486 | 0.9141 |
| 0.0001 | 147.0 | 588 | 0.7487 | 0.9141 |
| 0.0001 | 148.0 | 592 | 0.7488 | 0.9141 |
| 0.0001 | 149.0 | 596 | 0.7488 | 0.9141 |
| 0.0001 | 150.0 | 600 | 0.7488 | 0.9141 |
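
The Accuracy column is ordinary classification accuracy reported at each epoch's evaluation. The exact metric function used for this run is not shown in the card; a typical compute_metrics hook for the Trainer (a sketch, assuming logits and label arrays as produced by a sequence-classification model) looks like this:

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels) as passed by transformers.Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}
```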

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.4.0
  • Tokenizers 0.13.3