---
viewer: true
---
# MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder

## Description:

Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants.
This technology enhances patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics.
In this work, we introduce *MultiMed*, a collection of small-to-large end-to-end ASR models for the medical domain, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese, together with the corresponding real-world ASR dataset.
To the best of our knowledge, *MultiMed* is the first multilingual medical ASR dataset, and the largest in terms of total duration, number of speakers, diversity of diseases, recording conditions, speaker roles, unique medical terms, accents, and ICD-10 codes.

Please cite this paper: https://arxiv.org/abs/2404.05659

    @inproceedings{VietMed_dataset,
        title={VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain},
        author={Khai Le-Duc},
        year={2024},
        booktitle={Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    }

To load the labeled data, please refer to our [Hugging Face](https://huggingface.co/datasets/leduckhai/VietMed) and [Papers with Code](https://paperswithcode.com/dataset/vietmed) pages.

For the full dataset (labeled and unlabeled data) and pre-trained models, please refer to [Google Drive](https://drive.google.com/drive/folders/1hsoB_xjWh66glKg3tQaSLm4S1SVPyANP?usp=sharing).
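
For convenience, the labeled data can also be pulled straight from the Hugging Face Hub. Below is a minimal loading sketch assuming the default config; check the dataset card for the exact config and split names.

```python
# Minimal loading sketch (assumes the default config of the Hub dataset;
# see the dataset card for the exact config/split names).
from datasets import load_dataset

dataset = load_dataset("leduckhai/VietMed")
print(dataset)  # shows the available splits and columns
```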

## Limitations:

Since this dataset is human-labeled, one or two words at the start or end of a recording might be missing from the transcript.
This is the nature of human-labeled datasets: annotators cannot reliably segment words shorter than about one second.
In contrast, forced alignment could solve this problem, because machines can "listen" at a resolution of 10-20 ms.
However, a forced aligner only learns what it is taught by humans.
Therefore, no transcript is perfect. We will pursue human-machine collaboration to produce higher-quality transcripts in a follow-up paper.
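
As an illustration of the frame-level resolution mentioned above, here is a minimal forced-alignment sketch using torchaudio's multilingual `MMS_FA` pipeline (torchaudio >= 2.1). This is not the labeling pipeline used for this dataset; the audio file and transcript below are placeholders.

```python
# Forced-alignment sketch: recover word boundaries at ~20 ms frame resolution.
import torch
import torchaudio
from torchaudio.pipelines import MMS_FA as bundle

device = "cuda" if torch.cuda.is_available() else "cpu"
model = bundle.get_model().to(device)
tokenizer = bundle.get_tokenizer()
aligner = bundle.get_aligner()

waveform, sr = torchaudio.load("example.wav")  # placeholder audio file
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

words = "benh nhan bi sot cao".split()  # placeholder (romanized) transcript
with torch.inference_mode():
    emission, _ = model(waveform.to(device))
    token_spans = aligner(emission[0], tokenizer(words))

# Convert frame indices to seconds and print per-word timestamps.
sec_per_frame = waveform.size(1) / bundle.sample_rate / emission.size(1)
for word, spans in zip(words, token_spans):
    start, end = spans[0].start * sec_per_frame, spans[-1].end * sec_per_frame
    print(f"{word}: {start:.2f}s - {end:.2f}s")
```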

## Contact:

If any links are broken, please contact me and I will fix them!

Thanks to [Phan Phuc](https://www.linkedin.com/in/pphuc/) for the dataset viewer <3

```
Le Duc Khai
University of Toronto, Canada
Email: [email protected]
GitHub: https://github.com/leduckhai
```