
Medical_Whisper_large_1.5b

This model is a fine-tuned version of openai/whisper-large-v3 on the primock_data dataset.

Model description

A fine-tuned version of whisper-large-v3, obtained through transfer learning on doctor/patient consultations. This version is exported to ONNX in fp32 precision. Stay tuned for instructions on how to run this pipeline with ONNX Runtime!
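Until official run instructions are published, the snippet below is a minimal sketch of loading the ONNX export through Hugging Face Optimum's ONNX Runtime backend. The compatibility of this repository's file layout with ORTModelForSpeechSeq2Seq and the `sample.wav` path are assumptions, not verified instructions.

```python
# Minimal sketch (not official instructions) for running the ONNX export with
# ONNX Runtime via Hugging Face Optimum. Assumes the repo files are compatible
# with ORTModelForSpeechSeq2Seq and that a local "sample.wav" exists.
from transformers import AutoProcessor, pipeline
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

model_id = "Esperanto/Medical-Whisper-large-kvc-fp32-onnx"
processor = AutoProcessor.from_pretrained(model_id)
model = ORTModelForSpeechSeq2Seq.from_pretrained(model_id)

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("sample.wav")["text"])
```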

Intended uses & limitations

Medical transcription of doctor/patient consultations.

Training and evaluation data

Na0s/Primock_med

Training procedure

Exhaustive transfer learning

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 6
  • eval_batch_size: 6
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 24
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
  • mixed_precision_training: Native AMP
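
For reference, the list above maps onto transformers' Seq2SeqTrainingArguments roughly as sketched below. The output directory is hypothetical and the original training script is not published, so treat this as an approximation rather than the exact configuration used.

```python
# Approximate reconstruction of the hyperparameters above; not the original script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-medical",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    gradient_accumulation_steps=4,   # effective total train batch size: 6 * 4 = 24
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    fp16=True,                       # native AMP mixed-precision training
)
# Adam betas=(0.9, 0.999) and epsilon=1e-8 match the Trainer defaults.
```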

Performance Overview:

| Model Name        | WER  | CER  | Number of Parameters |
|-------------------|------|------|----------------------|
| Whisper Tiny      | 0.46 | 0.27 | 39M                  |
| Whisper Base      | 0.42 | 0.26 | 74M                  |
| Whisper Small     | 0.39 | 0.26 | 244M                 |
| Whisper Medium    | 0.37 | 0.23 | 769M                 |
| Whisper Large v3  | 0.33 | 0.18 | 1.55B                |
| Whisper Medical   | 0.19 | 0.10 | 1.55B                |

Table: Performance of the foundation Whisper models vs. Medical Whisper on the validation set.

| Model Name      | WER  | CER  | Number of Parameters |
|-----------------|------|------|----------------------|
| Whisper Medical | 0.24 | 0.13 | 1.55B                |

Table: Performance of Medical Whisper on the test set.
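
The WER and CER figures in both tables are standard word- and character-error rates. The exact evaluation script is not published, but comparable numbers can be computed with the jiwer library as in this small sketch (the example sentences are illustrative only).

```python
# Illustrative WER/CER computation with jiwer; the example strings are made up.
import jiwer

references = ["the patient reports intermittent chest pain"]
hypotheses = ["the patient report intermittent chest pain"]

print("WER:", jiwer.wer(references, hypotheses))  # word error rate
print("CER:", jiwer.cer(references, hypotheses))  # character error rate
```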

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1