Whisper Small Serbian

This model is a fine-tuned version of openai/whisper-small on the Serbian (sr) subset of the mozilla-foundation/common_voice_11_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5670
  • Wer: 22.9057

Model description

More information needed

Intended uses & limitations

More information needed
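
As a minimal usage sketch, the fine-tuned checkpoint can be loaded through the transformers speech-recognition pipeline. The repository id asusevski/whisper-small-sr is taken from this card; the audio filename is purely illustrative.

```python
# Minimal inference sketch (assumes the checkpoint is published as
# "asusevski/whisper-small-sr"; substitute a local path otherwise).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asusevski/whisper-small-sr",
)

# Transcribe a local audio file (decoding requires ffmpeg or soundfile).
result = asr("example_serbian_audio.wav")  # hypothetical file name
print(result["text"])
```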

Training and evaluation data

The model was trained on the combined train and validation splits of the Serbian (sr) subset of Common Voice 11.0 and evaluated on the test split of the same subset.
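
For reference, the splits described above could be assembled with the datasets library as follows. This is a sketch, not the exact preparation script; Common Voice 11.0 is a gated dataset, so an authenticated Hugging Face login may be required.

```python
# Sketch of loading the Serbian Common Voice splits described above.
from datasets import load_dataset

# "train+validation" concatenates the two splits, matching the card.
train_data = load_dataset(
    "mozilla-foundation/common_voice_11_0", "sr", split="train+validation"
)
test_data = load_dataset(
    "mozilla-foundation/common_voice_11_0", "sr", split="test"
)
```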

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 800
  • mixed_precision_training: Native AMP
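
These settings map onto transformers' Seq2SeqTrainingArguments roughly as follows. This is a reconstruction from the list above, not the script actually used; the output directory is hypothetical.

```python
# Approximate reconstruction of the training configuration.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-sr",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=800,
    fp16=True,                         # "Native AMP" mixed precision
)
```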

Training results

Training Loss   Epoch   Step   Validation Loss   Wer
0.4257          5.0     100    0.4377            32.4160
0.0779          10.0    200    0.3928            23.7556
0.0108          15.0    300    0.4856            23.4318
0.0104          20.0    400    0.5637            25.4958
0.0069          25.0    500    0.5289            23.1485
0.0022          30.0    600    0.5670            22.9057
0.0012          35.0    700    0.5746            23.0271
0.0006          40.0    800    0.5810            23.1890
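
The Wer column reports word error rate as a percentage. As a sketch, values like these can be computed with the evaluate library; the example strings below are illustrative, not from the evaluation set.

```python
# Word error rate as used on this card (lower is better).
import evaluate

wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["zdravo svete"],        # model output (illustrative)
    references=["zdravo svete danas"],   # ground-truth transcript
)
print(f"WER: {wer:.4f}")  # one deletion over three reference words -> 33.3333
```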

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.8.1.dev0
  • Tokenizers 0.13.2