
whisper-ai-clp

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.0000
  • WER: 12.2249
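
The checkpoint can be loaded for transcription through the transformers ASR pipeline. This is a minimal sketch, assuming the repository id susmitabhatt/whisper-ai-clp (from the model tree at the end of this card) and a placeholder local audio file sample.wav; decoding a file path requires ffmpeg to be installed:

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint; the repository id is taken from the
# model tree at the end of this card.
asr = pipeline(
    "automatic-speech-recognition",
    model="susmitabhatt/whisper-ai-clp",
    device=0 if torch.cuda.is_available() else -1,
)

# "sample.wav" is a placeholder; the pipeline resamples the audio to the
# model's expected sampling rate before transcribing.
print(asr("sample.wav")["text"])
```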

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0004
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 132
  • num_epochs: 30
  • mixed_precision_training: Native AMP
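
For reference, these values map onto Seq2SeqTrainingArguments roughly as follows. This is a sketch under the assumption that the run used the standard transformers Trainer setup; output_dir is a hypothetical placeholder, and the Adam betas/epsilon are left at their defaults, which match the values listed above:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-ai-clp",   # hypothetical placeholder
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=132,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```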

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 1.1763        | 1.2422  | 100  | 0.0155          | 10.5134 |
| 0.1632        | 2.4845  | 200  | 0.0289          | 19.5599 |
| 0.0783        | 3.7267  | 300  | 0.0152          | 13.0807 |
| 0.0676        | 4.9689  | 400  | 0.1121          | 33.6186 |
| 0.0708        | 6.2112  | 500  | 0.0014          | 11.3692 |
| 0.0377        | 7.4534  | 600  | 0.0041          | 8.5575  |
| 0.0323        | 8.6957  | 700  | 0.0001          | 19.0709 |
| 0.0311        | 9.9379  | 800  | 0.0021          | 14.4254 |
| 0.0236        | 11.1801 | 900  | 0.0001          | 16.9927 |
| 0.0189        | 12.4224 | 1000 | 0.0027          | 13.9364 |
| 0.01          | 13.6646 | 1100 | 0.0002          | 9.9022  |
| 0.0071        | 14.9068 | 1200 | 0.0022          | 16.6259 |
| 0.0085        | 16.1491 | 1300 | 0.0002          | 11.3692 |
| 0.0043        | 17.3913 | 1400 | 0.0013          | 14.3032 |
| 0.0056        | 18.6335 | 1500 | 0.0001          | 9.7800  |
| 0.0021        | 19.8758 | 1600 | 0.0046          | 10.6357 |
| 0.0013        | 21.1180 | 1700 | 0.0019          | 12.8362 |
| 0.0006        | 22.3602 | 1800 | 0.0000          | 11.7359 |
| 0.0001        | 23.6025 | 1900 | 0.0000          | 12.1027 |
| 0.0           | 24.8447 | 2000 | 0.0000          | 12.2249 |
| 0.0           | 26.0870 | 2100 | 0.0000          | 12.2249 |
| 0.0           | 27.3292 | 2200 | 0.0000          | 12.2249 |
| 0.0           | 28.5714 | 2300 | 0.0000          | 12.2249 |
| 0.0           | 29.8137 | 2400 | 0.0000          | 12.2249 |
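
The WER column above appears to be reported as a percentage. As a sketch of how such scores are typically computed with the evaluate library (the transcripts below are placeholder strings, not taken from the actual evaluation set):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder reference transcripts and model predictions.
references = ["hello world", "the quick brown fox"]
predictions = ["hello word", "the quick brown fox"]

# compute() returns a fraction; multiply by 100 to match the table above.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```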

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
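
The installed versions can be checked against this list at runtime; a minimal sketch, assuming the packages are importable under their usual module names:

```python
import datasets, tokenizers, torch, transformers

# Print installed versions to compare against the list above.
for mod in (transformers, torch, datasets, tokenizers):
    print(mod.__name__, mod.__version__)
```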

Model size

  • 242M params (Safetensors, tensor type F32)

Model tree for susmitabhatt/whisper-ai-clp

  • Fine-tuned from openai/whisper-small