roberta-large-ner-qlorafinetune-runs-colab

This model is a fine-tuned version of FacebookAI/xlm-roberta-large on the biobert_json dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0681
  • Precision: 0.9324
  • Recall: 0.9599
  • F1: 0.9460
  • Accuracy: 0.9808
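
These are standard entity-level NER metrics as reported by the Trainer. A minimal sketch of how such metrics are typically computed with the seqeval metric follows; the compute_metrics helper and the label_list argument are illustrative assumptions, not code from this repository.

```python
# Sketch of entity-level metric computation for NER fine-tunes; the helper
# name and the use of the `seqeval` metric are assumptions.
import evaluate
import numpy as np

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred, label_list):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Positions the Trainer masks with -100 (special tokens, padding) are dropped.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```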

Model description

This is a QLoRA adapter for FacebookAI/xlm-roberta-large: the base model is loaded in quantized form and lightweight LoRA adapters are fine-tuned on top for named entity recognition on the biobert_json dataset. The run was executed on Google Colab. Further details have not been provided.

Intended uses & limitations

More information needed
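
While usage notes are still pending, a minimal inference sketch is given below, assuming the adapter was saved with a token classification head. The example sentence is illustrative, and the printed label names depend on the id2label mapping stored with the adapter.

```python
# Minimal inference sketch; assumes the adapter carries a token
# classification head and an id2label mapping.
import torch
from peft import AutoPeftModelForTokenClassification
from transformers import AutoTokenizer

repo_id = "Buho89/roberta-large-ner-qlorafinetune-runs-colab"
model = AutoPeftModelForTokenClassification.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")

text = "The patient was prescribed 500 mg of amoxicillin."  # illustrative input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predicted_ids):
    print(token, model.config.id2label[pred])
```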

Training and evaluation data

More information needed
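
For reference, token-classification fine-tunes of this kind usually tokenize the word-level dataset and align the tags to subword tokens. The sketch below shows the conventional alignment step; the dataset path and the "tokens"/"ner_tags" column names are assumptions.

```python
# Hedged preprocessing sketch; the dataset identifier and column names
# ("tokens", "ner_tags") are assumptions, not confirmed by this card.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("biobert_json")  # placeholder; substitute the actual Hub id
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")

def tokenize_and_align_labels(batch):
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, word_labels in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        aligned, previous = [], None
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                aligned.append(-100)  # mask special tokens and repeated subwords
            else:
                aligned.append(word_labels[word_id])
            previous = word_id
        all_labels.append(aligned)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)
```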

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0004
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • training_steps: 1224
  • mixed_precision_training: Native AMP
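
As a rough reconstruction, a QLoRA setup with these hyperparameters might look like the sketch below. Only the TrainingArguments values come from the list above; the quantization settings, LoRA rank/alpha/target modules, and the label count are assumptions not recorded in this card.

```python
# Hedged reconstruction of the training setup; bnb and LoRA settings plus
# NUM_LABELS are assumptions, the TrainingArguments values are from the card.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForTokenClassification,
    BitsAndBytesConfig,
    TrainingArguments,
)

NUM_LABELS = 17  # placeholder; use the biobert_json tag-set size

bnb_config = BitsAndBytesConfig(  # assumed 4-bit NF4 quantization
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForTokenClassification.from_pretrained(
    "FacebookAI/xlm-roberta-large",
    num_labels=NUM_LABELS,
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(  # rank, alpha, and target modules are assumptions
        task_type="TOKEN_CLS",
        r=16,
        lora_alpha=32,
        target_modules=["query", "value"],
    ),
)

training_args = TrainingArguments(  # values from the hyperparameter list
    output_dir="roberta-large-ner-qlorafinetune-runs-colab",
    learning_rate=4e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="paged_adamw_8bit",
    lr_scheduler_type="linear",
    max_steps=1224,
    fp16=True,  # Native AMP mixed precision
)
```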

Training results

| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.5288        | 0.0654 | 20   | 1.2145          | 0.0084    | 0.0001 | 0.0002 | 0.7183   |
| 0.9544        | 0.1307 | 40   | 0.4841          | 0.7484    | 0.6219 | 0.6793 | 0.8849   |
| 0.4878        | 0.1961 | 60   | 0.2727          | 0.8256    | 0.7528 | 0.7875 | 0.9225   |
| 0.3072        | 0.2614 | 80   | 0.1840          | 0.8173    | 0.8716 | 0.8436 | 0.9486   |
| 0.2213        | 0.3268 | 100  | 0.1585          | 0.8248    | 0.9059 | 0.8634 | 0.9547   |
| 0.2246        | 0.3922 | 120  | 0.1568          | 0.8380    | 0.9193 | 0.8768 | 0.9552   |
| 0.1715        | 0.4575 | 140  | 0.1099          | 0.9058    | 0.9117 | 0.9087 | 0.9663   |
| 0.1591        | 0.5229 | 160  | 0.1138          | 0.8865    | 0.9488 | 0.9166 | 0.9680   |
| 0.1514        | 0.5882 | 180  | 0.0932          | 0.9002    | 0.9386 | 0.9190 | 0.9715   |
| 0.1216        | 0.6536 | 200  | 0.0903          | 0.9097    | 0.9449 | 0.9270 | 0.9729   |
| 0.134         | 0.7190 | 220  | 0.0949          | 0.9129    | 0.9275 | 0.9201 | 0.9715   |
| 0.1329        | 0.7843 | 240  | 0.1017          | 0.8967    | 0.9422 | 0.9189 | 0.9706   |
| 0.1192        | 0.8497 | 260  | 0.0929          | 0.9097    | 0.9367 | 0.9230 | 0.9723   |
| 0.1266        | 0.9150 | 280  | 0.1050          | 0.8881    | 0.9356 | 0.9112 | 0.9691   |
| 0.1332        | 0.9804 | 300  | 0.0963          | 0.9078    | 0.9343 | 0.9208 | 0.9716   |
| 0.1218        | 1.0458 | 320  | 0.0887          | 0.9104    | 0.9416 | 0.9257 | 0.9730   |
| 0.0943        | 1.1111 | 340  | 0.0904          | 0.9119    | 0.9469 | 0.9291 | 0.9733   |
| 0.1033        | 1.1765 | 360  | 0.0995          | 0.9035    | 0.9470 | 0.9247 | 0.9706   |
| 0.1053        | 1.2418 | 380  | 0.0829          | 0.9197    | 0.9439 | 0.9316 | 0.9766   |
| 0.1032        | 1.3072 | 400  | 0.0795          | 0.9150    | 0.9471 | 0.9308 | 0.9759   |
| 0.1079        | 1.3725 | 420  | 0.0870          | 0.8990    | 0.9285 | 0.9135 | 0.9715   |
| 0.1009        | 1.4379 | 440  | 0.0801          | 0.9250    | 0.9478 | 0.9363 | 0.9771   |
| 0.093         | 1.5033 | 460  | 0.0713          | 0.9341    | 0.9459 | 0.9399 | 0.9782   |
| 0.0909        | 1.5686 | 480  | 0.0762          | 0.9214    | 0.9556 | 0.9382 | 0.9774   |
| 0.0853        | 1.6340 | 500  | 0.0824          | 0.9152    | 0.9483 | 0.9315 | 0.9758   |
| 0.1002        | 1.6993 | 520  | 0.0933          | 0.9031    | 0.9539 | 0.9278 | 0.9737   |
| 0.0917        | 1.7647 | 540  | 0.0979          | 0.8713    | 0.9204 | 0.8952 | 0.9677   |
| 0.127         | 1.8301 | 560  | 0.1236          | 0.9003    | 0.9273 | 0.9136 | 0.9674   |
| 0.1221        | 1.8954 | 580  | 0.1022          | 0.9089    | 0.9346 | 0.9216 | 0.9711   |
| 0.1039        | 1.9608 | 600  | 0.0946          | 0.9052    | 0.9385 | 0.9215 | 0.9725   |
| 0.0873        | 2.0261 | 620  | 0.0914          | 0.9060    | 0.9521 | 0.9285 | 0.9737   |
| 0.0736        | 2.0915 | 640  | 0.0765          | 0.9228    | 0.9509 | 0.9366 | 0.9776   |
| 0.0584        | 2.1569 | 660  | 0.0795          | 0.9179    | 0.9423 | 0.9300 | 0.9761   |
| 0.0858        | 2.2222 | 680  | 0.0764          | 0.9229    | 0.9495 | 0.9360 | 0.9766   |
| 0.0849        | 2.2876 | 700  | 0.0797          | 0.9194    | 0.9420 | 0.9305 | 0.9768   |
| 0.0626        | 2.3529 | 720  | 0.0729          | 0.9327    | 0.9527 | 0.9426 | 0.9789   |
| 0.0725        | 2.4183 | 740  | 0.0747          | 0.9246    | 0.9574 | 0.9407 | 0.9781   |
| 0.0914        | 2.4837 | 760  | 0.0796          | 0.9196    | 0.9579 | 0.9383 | 0.9774   |
| 0.0676        | 2.5490 | 780  | 0.0762          | 0.9297    | 0.9572 | 0.9432 | 0.9793   |
| 0.0724        | 2.6144 | 800  | 0.0710          | 0.9388    | 0.9533 | 0.9460 | 0.9809   |
| 0.0635        | 2.6797 | 820  | 0.0757          | 0.9303    | 0.9520 | 0.9410 | 0.9780   |
| 0.0729        | 2.7451 | 840  | 0.0724          | 0.9279    | 0.9536 | 0.9406 | 0.9793   |
| 0.061         | 2.8105 | 860  | 0.0711          | 0.9278    | 0.9522 | 0.9399 | 0.9793   |
| 0.0646        | 2.8758 | 880  | 0.0792          | 0.9207    | 0.9544 | 0.9372 | 0.9767   |
| 0.0602        | 2.9412 | 900  | 0.0721          | 0.9246    | 0.9549 | 0.9395 | 0.9785   |
| 0.0568        | 3.0065 | 920  | 0.0685          | 0.9333    | 0.9540 | 0.9435 | 0.9804   |
| 0.0518        | 3.0719 | 940  | 0.0742          | 0.9239    | 0.9574 | 0.9403 | 0.9789   |
| 0.0547        | 3.1373 | 960  | 0.0798          | 0.9209    | 0.9573 | 0.9387 | 0.9778   |
| 0.0454        | 3.2026 | 980  | 0.0697          | 0.9366    | 0.9564 | 0.9464 | 0.9810   |
| 0.0549        | 3.2680 | 1000 | 0.0753          | 0.9253    | 0.9606 | 0.9426 | 0.9785   |
| 0.0534        | 3.3333 | 1020 | 0.0690          | 0.9345    | 0.9574 | 0.9458 | 0.9808   |
| 0.0527        | 3.3987 | 1040 | 0.0681          | 0.9297    | 0.9604 | 0.9448 | 0.9801   |
| 0.057         | 3.4641 | 1060 | 0.0672          | 0.9346    | 0.9585 | 0.9464 | 0.9812   |
| 0.0482        | 3.5294 | 1080 | 0.0705          | 0.9268    | 0.9569 | 0.9416 | 0.9801   |
| 0.0482        | 3.5948 | 1100 | 0.0689          | 0.9304    | 0.9566 | 0.9433 | 0.9804   |
| 0.0412        | 3.6601 | 1120 | 0.0670          | 0.9345    | 0.9609 | 0.9475 | 0.9815   |
| 0.0565        | 3.7255 | 1140 | 0.0676          | 0.9334    | 0.9603 | 0.9467 | 0.9810   |
| 0.0509        | 3.7908 | 1160 | 0.0672          | 0.9347    | 0.9615 | 0.9479 | 0.9814   |
| 0.0566        | 3.8562 | 1180 | 0.0684          | 0.9316    | 0.9601 | 0.9457 | 0.9806   |
| 0.0602        | 3.9216 | 1200 | 0.0690          | 0.9317    | 0.9601 | 0.9457 | 0.9805   |
| 0.0585        | 3.9869 | 1220 | 0.0681          | 0.9324    | 0.9599 | 0.9460 | 0.9808   |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0