---
viewer: true
dataset_info:
- config_name: Chinese
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 182566135.142
    num_examples: 1242
  - name: eval
    num_bytes: 12333509.0
    num_examples: 91
  - name: test
    num_bytes: 33014034.0
    num_examples: 225
  download_size: 227567289
  dataset_size: 227913678.142
- config_name: English
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 2789314997.152
    num_examples: 25512
  - name: eval
    num_bytes: 299242087.632
    num_examples: 2816
  - name: test
    num_bytes: 553873172.749
    num_examples: 4751
  download_size: 3627859275
  dataset_size: 3642430257.533
- config_name: French
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 168642145.231
    num_examples: 1403
  - name: eval
    num_bytes: 5164908.0
    num_examples: 42
  - name: test
    num_bytes: 42780388.0
    num_examples: 344
  download_size: 216118671
  dataset_size: 216587441.231
- config_name: German
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 181312217.029
    num_examples: 1443
  - name: test
    num_bytes: 137762006.256
    num_examples: 1091
  - name: eval
    num_bytes: 35475098.0
    num_examples: 287
  download_size: 354494147
  dataset_size: 354549321.285
- config_name: Vietnamese
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 56584901.453
    num_examples: 2773
  - name: test
    num_bytes: 69598082.31
    num_examples: 3437
  - name: dev
    num_bytes: 57617298.896
    num_examples: 2912
  download_size: 181789393
  dataset_size: 183800282.659
configs:
- config_name: Chinese
  data_files:
  - split: train
    path: Chinese/train-*
  - split: eval
    path: Chinese/eval-*
  - split: test
    path: Chinese/test-*
- config_name: English
  data_files:
  - split: train
    path: English/train-*
  - split: eval
    path: English/eval-*
  - split: test
    path: English/test-*
- config_name: French
  data_files:
  - split: train
    path: French/train-*
  - split: eval
    path: French/eval-*
  - split: test
    path: French/test-*
- config_name: German
  data_files:
  - split: train
    path: German/train-*
  - split: test
    path: German/test-*
  - split: eval
    path: German/eval-*
- config_name: Vietnamese
  data_files:
  - split: train
    path: Vietnamese/train-*
  - split: test
    path: Vietnamese/test-*
  - split: dev
    path: Vietnamese/dev-*
---

# MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder

## Description:

Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants. This technology enhances patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics. In this work, we introduce *MultiMed*, a collection of small-to-large end-to-end ASR models for the medical domain, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese, together with the corresponding real-world ASR dataset.
To the best of our knowledge, *MultiMed* is **the largest and the first multilingual medical ASR dataset** in terms of total duration, number of speakers, diversity of diseases, recording conditions, speaker roles, unique medical terms, accents, and ICD-10 codes.

Please cite this paper: [https://arxiv.org/abs/2409.14074](https://arxiv.org/abs/2409.14074)

```bibtex
@article{le2024multimed,
  title={MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder},
  author={Le-Duc, Khai and Phan, Phuc and Pham, Tan-Hanh and Tat, Bach Phan and Ngo, Minh-Huong and Hy, Truong-Son},
  journal={arXiv preprint arXiv:2409.14074},
  year={2024}
}
```

To load the labeled data, please refer to our [HuggingFace](https://huggingface.co./datasets/leduckhai/MultiMed) and [Papers with Code](https://paperswithcode.com/dataset/multimed) pages; a minimal loading sketch is also given at the end of this card.

## Contact:

If any links are broken, please contact me so I can fix them!

Thanks to [Phan Phuc](https://www.linkedin.com/in/pphuc/) for the dataset viewer <3

```
Le Duc Khai
University of Toronto, Canada
Email: duckhai.le@mail.utoronto.ca
GitHub: https://github.com/leduckhai
```
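## Usage:

Below is a minimal loading sketch using the Hugging Face `datasets` library. It is an illustration based on the config and split names in the metadata above (note that Vietnamese exposes a `dev` split where the other configs use `eval`), not an official script.

```python
# Minimal sketch: loading MultiMed with the Hugging Face `datasets` library.
# Config names ("Chinese", "English", "French", "German", "Vietnamese") and
# split names are taken from the metadata at the top of this card.
from datasets import load_dataset

# Load one language config; each language is a separate subset.
english = load_dataset("leduckhai/MultiMed", "English")
print(english)  # DatasetDict with "train", "eval", and "test" splits

# Each example carries an "audio" field (16 kHz for Chinese/English/French),
# a "text" transcript, and a "duration" in seconds.
sample = english["train"][0]
print(sample["text"], sample["duration"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```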