---
license: other
license_name: model-license
license_link: https://github.com/modelscope/FunASR/blob/main/MODEL_LICENSE
---

# Introduction

SenseVoice is a speech foundation model with multiple speech understanding capabilities, including automatic speech recognition (ASR), spoken language identification (LID), speech emotion recognition (SER), and audio event detection (AED).

<div align="center"><img src="image/sensevoice2.png" width="1000"/> </div>

# Highlights
**SenseVoice** focuses on high-accuracy multilingual speech recognition, speech emotion recognition, and audio event detection.
- **Multilingual Speech Recognition:** Trained on over 400,000 hours of data and supporting more than 50 languages, its recognition performance surpasses that of the Whisper model.
- **Rich Transcription:**
  - Excellent emotion recognition capabilities, matching and surpassing the effectiveness of the current best emotion recognition models on test data.
  - Sound event detection capabilities, supporting the detection of various common human-computer interaction events such as BGM, applause, laughter, crying, coughing, and sneezing.
- **Efficient Inference:** The SenseVoice-Small model uses a non-autoregressive end-to-end framework, leading to exceptionally low inference latency. It requires only 70ms to process 10 seconds of audio, 15 times faster than Whisper-Large.
- **Convenient Finetuning:** Convenient finetuning scripts and strategies allow users to easily address long-tail sample issues in their business scenarios.
- **Service Deployment:** A service deployment pipeline supports multi-concurrency requests, with client-side languages including Python, C++, HTML, Java, and C#, among others.

## <strong>[SenseVoice Project]()</strong>
<strong>[SenseVoice]()</strong> is a speech foundation model with multiple speech understanding capabilities, including automatic speech recognition (ASR), spoken language identification (LID), speech emotion recognition (SER), and acoustic event detection (AED).

[**github**]()
| [**What's New**]()
| [**Requirements**]()

# SenseVoice Model
SenseVoice-Small is an encoder-only speech foundation model designed for rapid voice understanding. It encompasses a variety of features including automatic speech recognition (ASR), spoken language identification (LID), speech emotion recognition (SER), acoustic event detection (AED), and Inverse Text Normalization (ITN). SenseVoice-Small supports multilingual recognition for Chinese, English, Cantonese, Japanese, and Korean.

<p align="center">
<img src="fig/sensevoice.png" width="1500" />
</p>

The SenseVoice-Small model is based on a non-autoregressive end-to-end framework. For a specified task, we prepend four embeddings as input to the encoder:

- **LID**: for predicting the language id of the audio.
- **SER**: for predicting the emotion label of the audio.
- **AED**: for predicting the event label of the audio.
- **ITN**: for specifying whether the recognized output text undergoes inverse text normalization.

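For intuition, here is a minimal sketch of the prepending mechanism (not the actual SenseVoice code; the token vocabulary, dimensions, and function names below are illustrative):

```python
import torch

# Illustrative task-token vocabulary and model width; the real ones differ.
d_model = 512
task_vocab = {"<|zh|>": 0, "<|NEUTRAL|>": 1, "<|Speech|>": 2, "<|withitn|>": 3}
task_embedding = torch.nn.Embedding(len(task_vocab), d_model)

def prepend_task_tokens(features: torch.Tensor, tokens: list) -> torch.Tensor:
    """Prepend one embedding per task token (LID, SER, AED, ITN) to the
    acoustic feature sequence, so the encoder consumes them as a prefix."""
    ids = torch.tensor([task_vocab[t] for t in tokens])
    prefix = task_embedding(ids)  # (4, d_model)
    prefix = prefix.unsqueeze(0).expand(features.size(0), -1, -1)
    return torch.cat([prefix, features], dim=1)  # (batch, 4 + T, d_model)

# Example: a batch of 2 utterances with 100 feature frames each.
feats = torch.randn(2, 100, d_model)
out = prepend_task_tokens(feats, ["<|zh|>", "<|NEUTRAL|>", "<|Speech|>", "<|withitn|>"])
print(out.shape)  # torch.Size([2, 104, 512])
```
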
# Usage

## Inference

### Method 1

```python
from model import SenseVoiceSmall

model_dir = "iic/SenseVoiceSmall"
m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir)

res = m.inference(
    data_in="https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav",
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    **kwargs,
)

print(res)
```

### Method 2

```python
from funasr import AutoModel

model_dir = "iic/SenseVoiceSmall"
input_file = (
    "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav"
)

model = AutoModel(
    model=model_dir,
    vad_model="fsmn-vad",
    vad_kwargs={"max_single_segment_time": 30000},
    trust_remote_code=True,
    device="cuda:0",
)

res = model.generate(
    input=input_file,
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    batch_size_s=0,
)

print(res)
```

The funasr version integrates the VAD (Voice Activity Detection) model and supports audio input of any duration, with `batch_size_s` specified in seconds.
If all inputs are short audio clips and batch inference is needed to speed up throughput, the VAD model can be removed and `batch_size` set accordingly.

```python
model = AutoModel(model=model_dir, trust_remote_code=True, device="cuda:0")

res = model.generate(
    input=input_file,
    cache={},
    language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
    use_itn=False,
    batch_size=64,
)
```

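The raw `res[0]["text"]` output interleaves the transcript with special tags for language, emotion, event, and ITN state. A minimal sketch of converting it to plain text, assuming the `rich_transcription_postprocess` helper shipped with recent funasr releases:

```python
from funasr.utils.postprocess_utils import rich_transcription_postprocess

# Before cleanup the text looks like "<|zh|><|NEUTRAL|><|Speech|><|woitn|>...".
text = rich_transcription_postprocess(res[0]["text"])
print(text)
```
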
For more usage, please refer to the [docs](https://github.com/modelscope/FunASR/blob/main/docs/tutorial/README.md).

### Export and Test

```python
# pip3 install -U funasr-onnx
from funasr_onnx import SenseVoiceSmall

model_dir = "iic/SenseVoiceCTC"
model = SenseVoiceSmall(model_dir, batch_size=1, quantize=True)

wav_path = [f"~/.cache/modelscope/hub/{model_dir}/example/asr_example.wav"]

result = model(wav_path)
print(result)
```

## Service

Not yet available.

## Finetune

### Requirements

```shell
git clone https://github.com/modelscope/FunASR.git && cd FunASR
pip3 install -e ./
```

### Data preparation

Data examples:

```text
{"key": "YOU0000008470_S0000238_punc_itn", "text_language": "<|en|>", "emo_target": "<|NEUTRAL|>", "event_target": "<|Speech|>", "with_or_wo_itn": "<|withitn|>", "target": "Including legal due diligence, subscription agreement, negotiation.", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/industrial_data/english_all/audio/YOU0000008470_S0000238.wav", "target_len": 7, "source_len": 140}
{"key": "AUD0000001556_S0007580", "text_language": "<|en|>", "emo_target": "<|NEUTRAL|>", "event_target": "<|Speech|>", "with_or_wo_itn": "<|woitn|>", "target": "there is a tendency to identify the self or take interest in what one has got used to", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/industrial_data/english_all/audio/AUD0000001556_S0007580.wav", "target_len": 18, "source_len": 360}
```

For the full example, refer to `data/train_example.jsonl`.
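
If you build the manifest programmatically, here is a minimal sketch (field names mirror the examples above; the key, path, and length values are illustrative):

```python
import json

# One JSON object per line; fields follow the examples above.
entry = {
    "key": "my_utt_0001",                  # unique utterance id (illustrative)
    "text_language": "<|en|>",             # language tag
    "emo_target": "<|NEUTRAL|>",           # emotion tag
    "event_target": "<|Speech|>",          # audio event tag
    "with_or_wo_itn": "<|withitn|>",       # with or without inverse text normalization
    "target": "Hello world.",              # transcript
    "source": "/path/to/my_utt_0001.wav",  # audio file path (illustrative)
    "target_len": 2,                       # target token count (illustrative)
    "source_len": 40,                      # source length (illustrative)
}

with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```
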
### Finetune

Make sure to set `train_tool` in `finetune.sh` to the absolute path of `funasr/bin/train_ds.py` in the FunASR installation directory you set up earlier, as in the sketch below.

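The relevant line in `finetune.sh` would look something like this (the path is illustrative; use your own checkout location):

```shell
# Absolute path to the training entry point inside your FunASR checkout (illustrative):
train_tool="/path/to/FunASR/funasr/bin/train_ds.py"
```
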
```shell
bash finetune.sh
```

## WebUI

```shell
python webui.py
```

<div align="center"><img src="image/webui.png" width="700"/> </div>

<a name="Community"></a>
# Community

# Performance

## Multilingual Speech Recognition

We compared the multilingual speech recognition performance of SenseVoice and Whisper on open-source benchmark datasets, including AISHELL-1, AISHELL-2, WenetSpeech, LibriSpeech, and Common Voice. For Chinese and Cantonese recognition, the SenseVoice-Small model has the advantage.

<div align="center">
<img src="image/asr_results.png" width="1000" />
</div>

## Speech Emotion Recognition

Due to the current lack of widely used benchmarks and methods for speech emotion recognition, we evaluated various metrics on multiple test sets and performed a comprehensive comparison with numerous results from recent benchmarks. The selected test sets include data in both Chinese and English and cover multiple styles such as performances, films, and natural conversations. Without finetuning on the target data, SenseVoice matches and exceeds the performance of the current best speech emotion recognition models.

<div align="center">
<img src="image/ser_table.png" width="1000" />
</div>

Furthermore, we compared multiple open-source speech emotion recognition models on the test sets. The results indicate that the SenseVoice-Large model achieved the best performance on nearly all datasets, while the SenseVoice-Small model also surpassed other open-source models on the majority of datasets.

<div align="center">
<img src="image/ser_figure.png" width="500" />
</div>

## Audio Event Detection

Although trained exclusively on speech data, SenseVoice can still function as a standalone event detection model. We compared its performance on the ESC-50 environmental sound classification dataset against the widely used industry models BEATs and PANN. SenseVoice achieved commendable results on these tasks, but due to limitations in training data and methodology, its event classification performance still lags behind specialized AED models.

<div align="center">
<img src="image/aed_figure.png" width="500" />
</div>

## Computational Efficiency

The SenseVoice-Small model uses a non-autoregressive end-to-end architecture, resulting in extremely low inference latency. With a parameter count similar to the Whisper-Small model, it infers 7 times faster than Whisper-Small and 17 times faster than Whisper-Large.

<div align="center">
<img src="image/inference.png" width="1000" />
</div>