layoutlmv2-base-uncased_finetuned_docvqa

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unspecified dataset. It achieves the following result on the evaluation set:

  • Loss: 5.0085 (final checkpoint; validation loss bottomed out at 2.1462 at step 900 and rose thereafter, see Training results)

Model description

LayoutLMv2 is a multimodal Transformer that jointly encodes text, 2-D layout (token bounding boxes), and page-image features for document understanding. Judging by its name, this checkpoint fine-tunes the base model for document visual question answering (DocVQA-style extractive QA); no further details were provided.

Intended uses & limitations

More information needed
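
Absent documented usage, below is a minimal inference sketch, assuming the checkpoint carries an extractive document-QA head (LayoutLMv2ForQuestionAnswering) and that LayoutLMv2's runtime dependencies (detectron2 and pytesseract) are installed; the repo id and file names are placeholders.

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering

model_id = "layoutlmv2-base-uncased_finetuned_docvqa"  # placeholder repo id
# If the fine-tuned repo lacks processor files, load the processor from the
# base checkpoint "microsoft/layoutlmv2-base-uncased" instead.
processor = LayoutLMv2Processor.from_pretrained(model_id)
model = LayoutLMv2ForQuestionAnswering.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # placeholder input page
question = "What is the invoice number?"

# The processor runs Tesseract OCR on the page, tokenizes the question plus the
# OCR'd words, and builds the bounding-box and image tensors the model expects.
encoding = processor(image, question, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**encoding)

# Decode the highest-scoring start/end token span back into text.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(processor.tokenizer.decode(encoding.input_ids[0][start : end + 1]))
```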

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
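
The sketch below reconstructs those settings as Hugging Face TrainingArguments. The output_dir, the steps-based evaluation schedule (inferred from the 50-step cadence of the log below), and all model/data wiring are assumptions, not documented choices.

```python
from transformers import TrainingArguments, set_seed

set_seed(42)

# Mirrors the hyperparameters listed above; everything else stays at Trainer defaults.
training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased_finetuned_docvqa",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="steps",  # assumption, inferred from the 50-step eval log
    eval_steps=50,
    logging_steps=50,
)
```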

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.3352 | 0.22 | 50 | 4.5120 |
| 4.3566 | 0.44 | 100 | 4.0171 |
| 3.9989 | 0.66 | 150 | 3.9234 |
| 3.8014 | 0.88 | 200 | 3.5051 |
| 3.5509 | 1.11 | 250 | 3.5408 |
| 3.1372 | 1.33 | 300 | 3.2247 |
| 2.9307 | 1.55 | 350 | 3.1225 |
| 2.9280 | 1.77 | 400 | 2.9461 |
| 2.7004 | 1.99 | 450 | 2.5206 |
| 2.1271 | 2.21 | 500 | 2.6079 |
| 2.1387 | 2.43 | 550 | 2.8524 |
| 1.9593 | 2.65 | 600 | 2.8749 |
| 2.0105 | 2.88 | 650 | 2.6666 |
| 1.8400 | 3.1 | 700 | 3.0599 |
| 1.9359 | 3.32 | 750 | 3.0472 |
| 1.5470 | 3.54 | 800 | 2.2308 |
| 1.4161 | 3.76 | 850 | 2.2889 |
| 2.1804 | 3.98 | 900 | 2.1462 |
| 1.0261 | 4.2 | 950 | 2.9056 |
| 1.3920 | 4.42 | 1000 | 3.0021 |
| 1.3816 | 4.65 | 1050 | 2.6913 |
| 1.0117 | 4.87 | 1100 | 2.8484 |
| 1.0094 | 5.09 | 1150 | 2.6936 |
| 0.7316 | 5.31 | 1200 | 2.9901 |
| 0.9172 | 5.53 | 1250 | 2.6366 |
| 0.8608 | 5.75 | 1300 | 2.8584 |
| 0.7116 | 5.97 | 1350 | 3.1944 |
| 0.3210 | 6.19 | 1400 | 3.4703 |
| 0.6663 | 6.42 | 1450 | 3.0456 |
| 0.6319 | 6.64 | 1500 | 3.3318 |
| 0.7001 | 6.86 | 1550 | 3.1439 |
| 0.5952 | 7.08 | 1600 | 3.3220 |
| 0.3900 | 7.3 | 1650 | 3.8266 |
| 0.4340 | 7.52 | 1700 | 3.8287 |
| 0.7599 | 7.74 | 1750 | 3.4079 |
| 0.5200 | 7.96 | 1800 | 3.3982 |
| 0.5257 | 8.19 | 1850 | 3.5208 |
| 0.4304 | 8.41 | 1900 | 3.8404 |
| 0.4213 | 8.63 | 1950 | 3.9974 |
| 0.3033 | 8.85 | 2000 | 3.9492 |
| 0.2947 | 9.07 | 2050 | 3.9279 |
| 0.2285 | 9.29 | 2100 | 3.5652 |
| 0.3472 | 9.51 | 2150 | 3.5741 |
| 0.2644 | 9.73 | 2200 | 3.8685 |
| 0.3667 | 9.96 | 2250 | 3.5242 |
| 0.1528 | 10.18 | 2300 | 3.5848 |
| 0.1489 | 10.4 | 2350 | 3.8603 |
| 0.1984 | 10.62 | 2400 | 3.6773 |
| 0.3131 | 10.84 | 2450 | 3.7021 |
| 0.1866 | 11.06 | 2500 | 3.8918 |
| 0.1908 | 11.28 | 2550 | 3.9479 |
| 0.1955 | 11.5 | 2600 | 3.9596 |
| 0.1382 | 11.73 | 2650 | 4.1168 |
| 0.2528 | 11.95 | 2700 | 4.1007 |
| 0.0538 | 12.17 | 2750 | 4.2003 |
| 0.1354 | 12.39 | 2800 | 4.3118 |
| 0.1218 | 12.61 | 2850 | 4.1494 |
| 0.1956 | 12.83 | 2900 | 4.1475 |
| 0.0691 | 13.05 | 2950 | 4.4141 |
| 0.0526 | 13.27 | 3000 | 4.7115 |
| 0.0984 | 13.5 | 3050 | 4.6013 |
| 0.1828 | 13.72 | 3100 | 4.2457 |
| 0.0906 | 13.94 | 3150 | 4.4969 |
| 0.0250 | 14.16 | 3200 | 4.6981 |
| 0.0149 | 14.38 | 3250 | 4.8642 |
| 0.1230 | 14.6 | 3300 | 4.5326 |
| 0.0876 | 14.82 | 3350 | 4.5953 |
| 0.0771 | 15.04 | 3400 | 4.4175 |
| 0.0660 | 15.27 | 3450 | 4.6324 |
| 0.0542 | 15.49 | 3500 | 4.5058 |
| 0.0293 | 15.71 | 3550 | 4.7244 |
| 0.0428 | 15.93 | 3600 | 4.9415 |
| 0.0090 | 16.15 | 3650 | 4.9592 |
| 0.0715 | 16.37 | 3700 | 4.9211 |
| 0.0044 | 16.59 | 3750 | 4.9854 |
| 0.0767 | 16.81 | 3800 | 4.7985 |
| 0.0356 | 17.04 | 3850 | 4.7618 |
| 0.0562 | 17.26 | 3900 | 4.9239 |
| 0.0085 | 17.48 | 3950 | 4.9837 |
| 0.0114 | 17.7 | 4000 | 5.0808 |
| 0.0057 | 17.92 | 4050 | 5.0377 |
| 0.0306 | 18.14 | 4100 | 5.0137 |
| 0.0426 | 18.36 | 4150 | 4.9367 |
| 0.0429 | 18.58 | 4200 | 5.0050 |
| 0.0081 | 18.81 | 4250 | 4.9806 |
| 0.0168 | 19.03 | 4300 | 4.9902 |
| 0.0074 | 19.25 | 4350 | 4.9939 |
| 0.0075 | 19.47 | 4400 | 4.9986 |
| 0.0307 | 19.69 | 4450 | 5.0095 |
| 0.0200 | 19.91 | 4500 | 5.0085 |

Framework versions

  • Transformers 4.29.1
  • PyTorch 1.12.1
  • Datasets 2.11.0
  • Tokenizers 0.11.0