
my_awesome_qa_model

This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3052
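
The model can be loaded with the standard question-answering pipeline. A minimal sketch, assuming the checkpoint is published on the Hub under lash/my_awesome_qa_model (the repository id shown on this page); the question and context are placeholder examples:

```python
from transformers import pipeline

# Load the fine-tuned extractive QA model; the repo id below is an
# assumption based on this page and may need adjusting.
qa = pipeline("question-answering", model="lash/my_awesome_qa_model")

result = qa(
    question="What is DistilBERT distilled from?",
    context="DistilBERT is a smaller, faster transformer model distilled from BERT.",
)
print(result["answer"], result["score"])
```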

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-07
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
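
For reference, these settings map directly onto TrainingArguments. A minimal sketch, assuming the standard Trainer setup; output_dir and the per-epoch evaluation strategy are assumptions not stated in the card, everything else mirrors the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_qa_model",  # assumption; not stated in the card
    learning_rate=2e-7,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",  # assumption; the table below reports validation loss per epoch
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default,
# so no optimizer arguments need to be set explicitly.
```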

Training results

Training Loss | Epoch | Step | Validation Loss
4.6796 | 1.0 | 2190 | 4.4264
3.9385 | 2.0 | 4380 | 3.7109
3.3803 | 3.0 | 6570 | 3.2006
3.0145 | 4.0 | 8760 | 2.7950
2.7776 | 5.0 | 10950 | 2.5765
2.595 | 6.0 | 13140 | 2.4387
2.4978 | 7.0 | 15330 | 2.3404
2.3957 | 8.0 | 17520 | 2.2612
2.3229 | 9.0 | 19710 | 2.1812
2.2338 | 10.0 | 21900 | 2.0971
2.1596 | 11.0 | 24090 | 2.0173
2.0972 | 12.0 | 26280 | 1.9428
2.0085 | 13.0 | 28470 | 1.8775
1.9591 | 14.0 | 30660 | 1.8191
1.9021 | 15.0 | 32850 | 1.7753
1.8743 | 16.0 | 35040 | 1.7351
1.8223 | 17.0 | 37230 | 1.7036
1.8064 | 18.0 | 39420 | 1.6734
1.7535 | 19.0 | 41610 | 1.6507
1.7349 | 20.0 | 43800 | 1.6266
1.7017 | 21.0 | 45990 | 1.6077
1.6698 | 22.0 | 48180 | 1.5890
1.6592 | 23.0 | 50370 | 1.5778
1.6407 | 24.0 | 52560 | 1.5616
1.6127 | 25.0 | 54750 | 1.5478
1.6082 | 26.0 | 56940 | 1.5328
1.5979 | 27.0 | 59130 | 1.5232
1.5655 | 28.0 | 61320 | 1.5127
1.5408 | 29.0 | 63510 | 1.5034
1.5523 | 30.0 | 65700 | 1.4931
1.5291 | 31.0 | 67890 | 1.4841
1.527 | 32.0 | 70080 | 1.4731
1.5099 | 33.0 | 72270 | 1.4676
1.4846 | 34.0 | 74460 | 1.4564
1.4928 | 35.0 | 76650 | 1.4504
1.4743 | 36.0 | 78840 | 1.4432
1.4605 | 37.0 | 81030 | 1.4395
1.452 | 38.0 | 83220 | 1.4314
1.4617 | 39.0 | 85410 | 1.4257
1.4633 | 40.0 | 87600 | 1.4198
1.4551 | 41.0 | 89790 | 1.4143
1.4227 | 42.0 | 91980 | 1.4074
1.4208 | 43.0 | 94170 | 1.4050
1.4008 | 44.0 | 96360 | 1.3999
1.4075 | 45.0 | 98550 | 1.3966
1.4032 | 46.0 | 100740 | 1.3916
1.368 | 47.0 | 102930 | 1.3884
1.3802 | 48.0 | 105120 | 1.3843
1.3914 | 49.0 | 107310 | 1.3807
1.3692 | 50.0 | 109500 | 1.3765
1.3698 | 51.0 | 111690 | 1.3722
1.3597 | 52.0 | 113880 | 1.3684
1.3551 | 53.0 | 116070 | 1.3663
1.3498 | 54.0 | 118260 | 1.3628
1.3428 | 55.0 | 120450 | 1.3608
1.3367 | 56.0 | 122640 | 1.3573
1.3202 | 57.0 | 124830 | 1.3549
1.346 | 58.0 | 127020 | 1.3499
1.3268 | 59.0 | 129210 | 1.3488
1.3253 | 60.0 | 131400 | 1.3468
1.3132 | 61.0 | 133590 | 1.3438
1.3247 | 62.0 | 135780 | 1.3425
1.3222 | 63.0 | 137970 | 1.3397
1.3045 | 64.0 | 140160 | 1.3381
1.3096 | 65.0 | 142350 | 1.3345
1.3131 | 66.0 | 144540 | 1.3334
1.284 | 67.0 | 146730 | 1.3331
1.2991 | 68.0 | 148920 | 1.3294
1.2794 | 69.0 | 151110 | 1.3280
1.2992 | 70.0 | 153300 | 1.3278
1.2884 | 71.0 | 155490 | 1.3259
1.2934 | 72.0 | 157680 | 1.3235
1.2778 | 73.0 | 159870 | 1.3222
1.2771 | 74.0 | 162060 | 1.3205
1.2846 | 75.0 | 164250 | 1.3190
1.2666 | 76.0 | 166440 | 1.3193
1.2828 | 77.0 | 168630 | 1.3170
1.2804 | 78.0 | 170820 | 1.3164
1.283 | 79.0 | 173010 | 1.3149
1.2621 | 80.0 | 175200 | 1.3139
1.2779 | 81.0 | 177390 | 1.3136
1.2633 | 82.0 | 179580 | 1.3125
1.2596 | 83.0 | 181770 | 1.3116
1.2653 | 84.0 | 183960 | 1.3103
1.2715 | 85.0 | 186150 | 1.3088
1.2553 | 86.0 | 188340 | 1.3095
1.2688 | 87.0 | 190530 | 1.3093
1.2496 | 88.0 | 192720 | 1.3086
1.2683 | 89.0 | 194910 | 1.3080
1.242 | 90.0 | 197100 | 1.3078
1.2619 | 91.0 | 199290 | 1.3065
1.2662 | 92.0 | 201480 | 1.3063
1.2557 | 93.0 | 203670 | 1.3059
1.2623 | 94.0 | 205860 | 1.3057
1.2402 | 95.0 | 208050 | 1.3056
1.2389 | 96.0 | 210240 | 1.3054
1.2653 | 97.0 | 212430 | 1.3053
1.2365 | 98.0 | 214620 | 1.3052
1.2637 | 99.0 | 216810 | 1.3052
1.2375 | 100.0 | 219000 | 1.3052

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.1.2+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2