leduckhai committed
Commit: ae6bba9
Parent(s): 56ee00d

Update README.md

Files changed (1)
  1. README.md +7 -16
README.md CHANGED
@@ -160,25 +160,16 @@ In this work, we introduce *MultiMed*, a collection of small-to-large end-to-end
 To our best knowledge, *MultiMed* stands as **the largest and the first multilingual medical ASR dataset**, in terms of total duration, number of speakers, diversity of diseases, recording conditions, speaker roles, unique medical terms, accents, and ICD-10 codes.
 
 
-Please cite this paper: **TODO**
+Please cite this paper: [https://arxiv.org/abs/2409.14074](https://arxiv.org/abs/2409.14074)
 
-@inproceedings{**TODO**,
-title={**TODO**},
-author={Khai Le-Duc},
-year={2024},
-booktitle = {**TODO**},
+@inproceedings{le2024multimed,
+title={MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder},
+author={Le-Duc, Khai and Phan, Phuc and Pham, Tan-Hanh and Tat, Bach Phan and Ngo, Minh-Huong and Hy, Truong-Son},
+journal={arXiv preprint arXiv:2409.14074},
+year={2024}
 }
-**TODO** To load labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/VietMed), [Paperswithcodes](https://paperswithcode.com/dataset/vietmed).
 
-**TODO** For full dataset (labeled data + unlabeled data) and pre-trained models, please refer to [Google Drive](https://drive.google.com/drive/folders/1hsoB_xjWh66glKg3tQaSLm4S1SVPyANP?usp=sharing)
-
-## Limitations:
-
-**TODO** Since this dataset is human-labeled, 1-2 ending/starting words present in the recording might not be present in the transcript.
-That's the nature of human-labeled dataset, in which humans can't distinguish words that are faster than 1 second.
-In contrast, forced alignment could solve this problem because machines can "listen" words in 10ms-20ms.
-However, forced alignment only learns what it is taught by humans.
-Therefore, no transcript is perfect. We will conduct human-machine collaboration to get "more perfect" transcript in the next paper.
+To load labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/MultiMed), [Paperswithcodes](https://paperswithcode.com/dataset/multimed).
 
 ## Contact: