finetune_colpali_v1_2-german_ver2-8bit

This model is a fine-tuned version of vidore/colpaligemma-3b-pt-448-base on the German_docx dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1034
  • Model Preparation Time: 0.0101

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
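The linear schedule with warmup implied by these settings can be sketched as below. This is an illustrative sketch, not the actual training script; the total step count (~120 optimizer steps over 10 epochs) is inferred from the training log and is an assumption.

```python
# Hypothetical sketch of the linear warmup + linear decay schedule above.
# `total_steps` is an assumption inferred from the training results table.
base_lr = 5e-5
warmup_steps = 100
total_steps = 120

def lr_at(step):
    """Learning rate at a given optimizer step: linear warmup, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Effective batch size: per-device train batch * gradient accumulation steps.
effective_batch = 4 * 4  # = total_train_batch_size of 16
```

Note that with 100 warmup steps out of roughly 120 total, most of training runs during warmup, so the peak learning rate of 5e-05 is only reached near the end of epoch 8.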

Training results

| Training Loss | Epoch  | Step | Validation Loss | Model Preparation Time |
|:-------------:|:------:|:----:|:---------------:|:----------------------:|
| No log        | 0.0816 | 1    | 0.4123          | 0.0101                 |
| 1.2101        | 0.8163 | 10   | 0.3859          | 0.0101                 |
| 1.2237        | 1.6327 | 20   | 0.3079          | 0.0101                 |
| 0.6883        | 2.4490 | 30   | 0.2464          | 0.0101                 |
| 0.4804        | 3.2653 | 40   | 0.2011          | 0.0101                 |
| 0.5187        | 4.0816 | 50   | 0.1792          | 0.0101                 |
| 0.3899        | 4.8980 | 60   | 0.1563          | 0.0101                 |
| 0.203         | 5.7143 | 70   | 0.1307          | 0.0101                 |
| 0.1897        | 6.5306 | 80   | 0.0990          | 0.0101                 |
| 0.3326        | 7.3469 | 90   | 0.1246          | 0.0101                 |
| 0.3578        | 8.1633 | 100  | 0.1429          | 0.0101                 |
| 0.3568        | 8.9796 | 110  | 0.1148          | 0.0101                 |
| 0.1411        | 9.7959 | 120  | 0.1034          | 0.0101                 |

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.3.1
  • Datasets 3.1.0
  • Tokenizers 0.20.1
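A matching environment can be pinned with a requirements file like the following; this assumes wheels for these exact versions are available for your platform (the PyPI package name for Pytorch is `torch`):

```
transformers==4.46.1
torch==2.3.1
datasets==3.1.0
tokenizers==0.20.1
```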