wav2vec2_base_vietnamese_control_dataset_75_epochs

This model is a fine-tuned version of nguyenvulebinh/wav2vec2-base-vi on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0584
  • Wer: 0.1728

Model description

More information needed

Intended uses & limitations

More information needed
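The card does not document usage, but as a fine-tuned wav2vec2 CTC checkpoint the model can presumably be loaded with the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` API from `transformers`. A minimal, hypothetical inference sketch (the audio path is made up, and it is assumed the repo ships a compatible processor config):

```python
def transcribe(audio_path: str,
               model_id: str = "tuanmanh28/wav2vec2-base-vietnamese-control-dataset-75-epochs") -> str:
    """Hedged sketch: greedy CTC decoding with the standard Wav2Vec2 API."""
    import torch
    import torchaudio
    from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

    processor = Wav2Vec2Processor.from_pretrained(model_id)
    model = Wav2Vec2ForCTC.from_pretrained(model_id)
    model.eval()

    waveform, sr = torchaudio.load(audio_path)
    if sr != 16_000:
        # wav2vec2-base checkpoints expect 16 kHz mono input
        waveform = torchaudio.functional.resample(waveform, sr, 16_000)

    inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```

Greedy argmax decoding is the simplest option; a language-model-backed beam search would typically lower WER further.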

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 75
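For reference, the hyperparameters above map onto `transformers.TrainingArguments` keyword arguments roughly as follows (a sketch; the keyword names are the standard ones and are assumed, not taken from the original training script):

```python
# Assumed mapping of the listed hyperparameters to TrainingArguments kwargs.
training_kwargs = dict(
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,            # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=75,
)
```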

Training results

Training Loss   Epoch   Step   Validation Loss   Wer
16.8485         3.85     500   15.0708           1.0
7.7941          7.69    1000   5.1283            1.0
3.4391          11.54   1500   3.2167            1.0
3.116           15.38   2000   3.1172            1.0
3.0946          19.23   2500   3.1165            1.0
3.0478          23.08   3000   2.9426            1.0
2.4567          26.92   3500   1.6121            0.9993
1.2305          30.77   4000   0.5487            0.3630
0.5512          34.62   4500   0.2505            0.2364
0.3234          38.46   5000   0.1572            0.1869
0.2232          42.31   5500   0.1170            0.1782
0.1779          46.15   6000   0.0939            0.1772
0.145           50.0    6500   0.0830            0.1763
0.1383          53.85   7000   0.0739            0.1756
0.1161          57.69   7500   0.0681            0.1736
0.1054          61.54   8000   0.0630            0.1729
0.0958          65.38   8500   0.0620            0.1729
0.1028          69.23   9000   0.0582            0.1726
0.1015          73.08   9500   0.0584            0.1728
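The Wer column above is the word error rate: word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal, self-contained sketch of the metric (the evaluation itself was presumably done with a library such as `jiwer` or `evaluate`, not this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via a rolling-array Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between ref[:i] and hyp[:j]
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,            # deletion
                       d[j - 1] + 1,        # insertion
                       prev + (r != h))     # substitution (or match)
            prev = cur
    return d[-1] / len(ref)
```

For example, `wer("xin chao cac ban", "xin chao ban")` is 0.25: one deleted word out of four reference words.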

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.4
  • Tokenizers 0.13.3

Model tree for tuanmanh28/wav2vec2-base-vietnamese-control-dataset-75-epochs