Update README.md
README.md CHANGED
@@ -19,7 +19,7 @@ Our demo is available [here](https://huggingface.co/spaces/pyf98/OWSM_v3_demo).
 [OWSM v3.1](https://arxiv.org/abs/2401.16658) is an improved version of OWSM v3. It significantly outperforms OWSM v3 in almost all evaluation benchmarks.
 We do not include any new training data. Instead, we utilize a state-of-the-art speech encoder, [E-Branchformer](https://arxiv.org/abs/2210.00077).

-This is a small size model with 367M parameters and is trained on 70k hours of public speech data with lower restrictions (compared to the full OWSM data)
+**This is a small size model with 367M parameters and is trained on 70k hours of public speech data with lower restrictions (compared to the full OWSM data).** Please check our [project page](https://www.wavlab.org/activities/2024/owsm/) for more information.
 Specifically, it supports the following speech-to-text tasks:
 - Speech recognition
 - Utterance-level alignment
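For context, a minimal usage sketch for an OWSM-style checkpoint is shown below, using ESPnet's `Speech2Text` class from `espnet2.bin.s2t_inference`. The model ID `espnet/owsm_v3.1_ebf_small`, the audio file name, and the decoding options are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch, not the official README instructions.
# Assumptions: model ID and decoding options are illustrative.
# Requires: pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.s2t_inference import Speech2Text

# Load a speech-to-text model from the Hugging Face Hub (assumed model ID).
s2t = Speech2Text.from_pretrained(
    "espnet/owsm_v3.1_ebf_small",
    device="cpu",
    beam_size=5,
    lang_sym="<eng>",   # language token (assumed tag format)
    task_sym="<asr>",   # task token: speech recognition
)

# Run recognition on a 16 kHz mono waveform (hypothetical input file).
speech, _rate = sf.read("example.wav")
results = s2t(speech)
print(results[0][0])  # best hypothesis text
```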