swinv2-base-panorama-IQA

This model is a fine-tuned version of microsoft/swinv2-base-patch4-window8-256 on the isiqa-2019-hf dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0312
  • SROCC (Spearman rank-order correlation coefficient): 0.1132
  • LCC (Pearson linear correlation coefficient): 0.1583
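
SROCC and LCC are the two standard agreement metrics for IQA models: they measure rank-order and linear correlation, respectively, between predicted and ground-truth quality scores. The sketch below shows how they are typically computed with SciPy; the numbers are illustrative only, not this model's actual predictions.

```python
# SROCC (Spearman rank-order correlation) and LCC (Pearson linear
# correlation) between predicted and ground-truth quality scores.
from scipy.stats import pearsonr, spearmanr

y_true = [3.2, 4.1, 2.5, 4.8, 3.9]  # hypothetical ground-truth quality scores
y_pred = [3.0, 4.3, 2.9, 4.5, 3.6]  # hypothetical model predictions

srocc, _ = spearmanr(y_true, y_pred)  # rank-order agreement in [-1, 1]
lcc, _ = pearsonr(y_true, y_pred)     # linear agreement in [-1, 1]
print(f"SROCC: {srocc:.4f}, LCC: {lcc:.4f}")
```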

Model description

swinv2-base-panorama-IQA is an image quality assessment (IQA) model for panoramic images. It fine-tunes the SwinV2-base backbone (microsoft/swinv2-base-patch4-window8-256) to predict a quality score per image, which is compared against ground-truth ratings via SROCC and LCC.

Intended uses & limitations

The model is intended for quality scoring of panoramic images. Note that its final evaluation correlations are weak (SROCC 0.1132, LCC 0.1583), so predicted scores should be treated with caution. A usage sketch follows.
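
A minimal inference sketch, assuming the checkpoint exposes a standard transformers image-classification head with a single regression logit as the quality score; the card does not state the library interface, so the class choice and the `panorama.jpg` input path are assumptions.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumption: the checkpoint loads via the standard image-classification API
# and emits a single regression logit as the quality score.
repo_id = "DiTo97/swinv2-base-panorama-IQA"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("panorama.jpg").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted quality score: {score:.4f}")
```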

Training and evaluation data

The model was fine-tuned and evaluated on the isiqa-2019-hf dataset; a hedged loading sketch is shown below. Details on splits and quality labels are not given in the card.
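
A minimal sketch for loading the dataset with the datasets library. The Hub namespace is an assumption (only the name isiqa-2019-hf appears in the card), as are the available splits and columns.

```python
from datasets import load_dataset

# Assumption: the dataset lives under the author's namespace on the Hub.
dataset = load_dataset("DiTo97/isiqa-2019-hf")
print(dataset)  # inspect available splits and columns
```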

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mirrored in the TrainingArguments sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 10
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50.0
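
For convenience, the list above maps onto the transformers Trainer API roughly as follows. The original training script is not included in the card, so this is an approximation of the setup, not a verbatim reproduction; the output directory is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-base-panorama-IQA",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=10,
    gradient_accumulation_steps=4,  # 16 x 4 = total train batch size of 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=50.0,
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```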

Training results

| Training Loss | Epoch   | Step | Validation Loss | SROCC   | LCC     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
| No log        | 0.8571  | 3    | 0.3021          | -0.1668 | -0.1392 |
| No log        | 2.0     | 7    | 0.1286          | -0.1808 | -0.1347 |
| 0.2494        | 2.8571  | 10   | 0.0678          | -0.1784 | -0.1273 |
| 0.2494        | 4.0     | 14   | 0.1143          | -0.1625 | -0.1114 |
| 0.2494        | 4.8571  | 17   | 0.0686          | -0.1939 | -0.1152 |
| 0.069         | 6.0     | 21   | 0.0572          | -0.2063 | -0.1376 |
| 0.069         | 6.8571  | 24   | 0.0537          | -0.1965 | -0.1405 |
| 0.069         | 8.0     | 28   | 0.0671          | -0.1794 | -0.1289 |
| 0.0276        | 8.8571  | 31   | 0.0551          | -0.1443 | -0.1164 |
| 0.0276        | 10.0    | 35   | 0.0492          | -0.1110 | -0.0948 |
| 0.0276        | 10.8571 | 38   | 0.0465          | -0.0945 | -0.0767 |
| 0.0181        | 12.0    | 42   | 0.0449          | -0.0830 | -0.0464 |
| 0.0181        | 12.8571 | 45   | 0.0402          | -0.0659 | -0.0280 |
| 0.0181        | 14.0    | 49   | 0.0389          | -0.0411 | -0.0117 |
| 0.0128        | 14.8571 | 52   | 0.0380          | -0.0348 | -0.0055 |
| 0.0128        | 16.0    | 56   | 0.0371          | -0.0232 | 0.0088  |
| 0.0128        | 16.8571 | 59   | 0.0360          | 0.0048  | 0.0205  |
| 0.0112        | 18.0    | 63   | 0.0354          | 0.0128  | 0.0385  |
| 0.0112        | 18.8571 | 66   | 0.0352          | 0.0197  | 0.0509  |
| 0.0088        | 20.0    | 70   | 0.0346          | 0.0331  | 0.0670  |
| 0.0088        | 20.8571 | 73   | 0.0337          | 0.0412  | 0.0801  |
| 0.0088        | 22.0    | 77   | 0.0347          | 0.0396  | 0.0879  |
| 0.008         | 22.8571 | 80   | 0.0348          | 0.0512  | 0.0954  |
| 0.008         | 24.0    | 84   | 0.0339          | 0.0643  | 0.1071  |
| 0.008         | 24.8571 | 87   | 0.0332          | 0.0765  | 0.1143  |
| 0.0066        | 26.0    | 91   | 0.0334          | 0.0855  | 0.1240  |
| 0.0066        | 26.8571 | 94   | 0.0330          | 0.0938  | 0.1292  |
| 0.0066        | 28.0    | 98   | 0.0317          | 0.0997  | 0.1381  |
| 0.006         | 28.8571 | 101  | 0.0314          | 0.1087  | 0.1432  |
| 0.006         | 30.0    | 105  | 0.0317          | 0.1053  | 0.1446  |
| 0.006         | 30.8571 | 108  | 0.0317          | 0.0971  | 0.1465  |
| 0.0062        | 32.0    | 112  | 0.0315          | 0.1032  | 0.1496  |
| 0.0062        | 32.8571 | 115  | 0.0315          | 0.1032  | 0.1511  |
| 0.0062        | 34.0    | 119  | 0.0314          | 0.1032  | 0.1533  |
| 0.0057        | 34.8571 | 122  | 0.0314          | 0.1094  | 0.1543  |
| 0.0057        | 36.0    | 126  | 0.0313          | 0.1091  | 0.1558  |
| 0.0057        | 36.8571 | 129  | 0.0312          | 0.1132  | 0.1570  |
| 0.006         | 38.0    | 133  | 0.0312          | 0.1132  | 0.1577  |
| 0.006         | 38.8571 | 136  | 0.0312          | 0.1132  | 0.1581  |
| 0.0058        | 40.0    | 140  | 0.0312          | 0.1132  | 0.1583  |
| 0.0058        | 40.8571 | 143  | 0.0312          | 0.1132  | 0.1584  |
| 0.0058        | 42.0    | 147  | 0.0312          | 0.1132  | 0.1584  |
| 0.006         | 42.8571 | 150  | 0.0312          | 0.1132  | 0.1584  |

Framework versions

  • Transformers 4.42.3
  • PyTorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1