| Name | Processor ⚑ | Phonetic alphabet πŸ”€ | Insta-clone πŸ‘₯ | Emotion control 🎭 | Prompting πŸ“– | Speech control 🎚 | Streaming support 🌊 | Voice conversion 🦜 | Longform synthesis πŸ“œ |
|---|---|---|---|---|---|---|---|---|---|
| xVASynth | CPU / CUDA | ARPAbet+ | ❌ | 4-type 🎭 πŸ˜‘πŸ˜ƒπŸ˜­πŸ˜― per-phoneme | ❌ | 🎚 speed, pitch, energy, 🎭; per-phoneme | ❌ | 🦜 | ~15s max |
| GPT-SoVITS | CUDA | IPA | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 speed, stability | ❌ | 🦜 | |
| MetaVoice-1B | CUDA | | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 stability, similarity | | | πŸ“œ |
| IMS-Toucan | CUDA | IPA | πŸ‘₯ | ❌ | ❌ | 🎚 speed, stability; per-phoneme | | | |
| XTTS | CUDA | ❌ | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 speed, stability | 🌊 | ❌ | |
| StyleTTS | CPUs / CUDA | IPA | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | | 🌊 | | πŸ“œ |
| HierSpeech++ | | ❌ | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 speed, stability | | 🦜 | |
| E2/F5 TTS | CUDA | | πŸ‘₯ | 🎭πŸ‘₯ | πŸ—£πŸ“– | 🎚 speed | | | πŸ“œ / by ~30s |
| MaskGCT | CUDA | | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 set duration | | | |
| Fish Speech | CUDA | ❌ | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 stability | 🌊 | | πŸ“œ |
| OpenVoice | CUDA | ❌ | πŸ‘₯ | 6-type 🎭 πŸ˜‘πŸ˜ƒπŸ˜­πŸ˜―πŸ€«πŸ˜Š | ❌ | | | | |
| WhisperSpeech | CUDA | ❌ | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 speed | | | |
| MeloTTS | CPU / CUDA | ❌ | ❌ | | | 🎚 speed | | | |
| Matcha-TTS | | IPA | ❌ | ❌ | ❌ | 🎚 speed, stability | | | |
| Parler | CUDA | ❌ | | πŸŽ­πŸ“– | πŸ“– | ❌ | 🌊 | | |
| Pheme | CUDA | ❌ | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | 🎚 stability | | | |
| Piper | CPU / CUDA | ❌ | ❌ | ❌ | ❌ | 🎚 speed, stability | ❌ | ❌ | |
| TorToiSe TTS | | ❌ | ❌ | ❌ | πŸ—£πŸ“– | | 🌊 | | |
| Bark | CUDA | ❌ | | 🎭 tags | ❌ | | | | πŸ“œ / by ~13s |
| TTTS | CPU / CUDA | ❌ | πŸ‘₯ | 🎭πŸ‘₯ | ❌ | | | | |
| Amphion | CUDA | ❌ fusion | ❌ | ❌ | | 🎚 speed | | | |
| VITS/ MMS-TTS | CUDA | ❌ | ❌ | ❌ | | 🎚 speed | | | |
| AI4Bharat | | | | | | | | | |
| EmotiVoice | | | | | | | | | |
| Glow-TTS | | | | | | | | | |
| MahaTTS | | | | | | | | | |
| Neural-HMM TTS | | | | | | | | | |
| OverFlow TTS | | | | | | | | | |
| pflowTTS | | | | | | | | | |
| RAD-MMM | | | | | | | | | |
| RAD-TTS | | | | | | | | | |
| Silero | | | | | | | | | |
| Tacotron | | | | | | | | | |
| VALL-E | | | | | | | | | |

Models above are sorted by number of capabilities; see the legend below.

The GitHub repo was cloned for easier viewing and for embedding the above table, as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525
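The capability sorting can be reproduced with a short pandas sketch. This is a minimal illustration over a hypothetical two-row sample; the column names follow the capability table above, and a cell counts as a capability whenever it is filled with anything other than ❌:

```python
import pandas as pd

# Hypothetical sample rows mirroring the capability table above.
rows = [
    {"Name": "xVASynth", "Insta-clone πŸ‘₯": "❌", "Streaming support 🌊": "❌",
     "Voice conversion 🦜": "🦜", "Longform synthesis πŸ“œ": "~15s max"},
    {"Name": "Piper", "Insta-clone πŸ‘₯": "❌", "Streaming support 🌊": "❌",
     "Voice conversion 🦜": "❌", "Longform synthesis πŸ“œ": "❌"},
]
df = pd.DataFrame(rows)

# A cell counts as a capability if it is present and not "❌".
caps = df.drop(columns=["Name"]).apply(lambda col: col.notna() & (col != "❌"))
df["capabilities"] = caps.sum(axis=1)
df = df.sort_values("capabilities", ascending=False)
print(df[["Name", "capabilities"]])
```

With the full CSV loaded instead of the inline sample, the same counting yields the ordering used in the table.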


πŸ—£οΈ Open TTS Tracker

A one-stop shop to track all open-access / open-source Text-To-Speech (TTS) models as they come out. Feel free to make a PR for any that aren't linked here.

This is intended as a resource to increase awareness of these models and to make it easier for researchers, developers, and enthusiasts to stay informed about the latest advancements in the field.

This repo only tracks TTS models with open-source/open-access codebases. More motivation for everyone to open-source! πŸ€—

Some of the models are also being battle tested at TTS arenas:

  • πŸ† TTS Arena - Battle tab allows to choose 2 candidates and compare them
  • πŸ€—πŸ† TTS Spaces Arena - Uses online HuggingFace Spaces, which have Gradio API enabled
| Name | GitHub | Weights | License | Fine-tune | Languages | Paper | Demo | Issues |
|---|---|---|---|---|---|---|---|---|
| AI4Bharat | Repo | Hub | MIT | Yes | Indic | Paper | Demo | |
| Amphion | Repo | Hub | MIT | No | Multilingual | Paper | πŸ€— Space | |
| Bark | Repo | Hub | MIT | No | Multilingual | Paper | πŸ€— Space | |
| EmotiVoice | Repo | GDrive | Apache 2.0 | Yes | ZH + EN | Not Available | Not Available | Separate GUI agreement |
| F5-TTS | Repo | Hub | MIT | Yes | ZH + EN | Paper | πŸ€— Space | |
| Fish Speech | Repo | Hub | CC-BY-NC-SA 4.0 | Yes | Multilingual | Not Available | πŸ€— Space | |
| Glow-TTS | Repo | GDrive | MIT | Yes | English | Paper | GH Pages | |
| GPT-SoVITS | Repo | Hub | MIT | Yes | Multilingual | Not Available | Not Available | |
| HierSpeech++ | Repo | GDrive | MIT | No | KR + EN | Paper | πŸ€— Space | |
| IMS-Toucan | Repo | GH release | Apache 2.0 | Yes | ALL* | Paper | πŸ€— Space, πŸ€— Space* | |
| MahaTTS | Repo | Hub | Apache 2.0 | No | English + Indic | Not Available | Recordings, Colab | |
| MaskGCT (Amphion) | Repo | Hub | CC-BY-NC 4.0 | No | Multilingual | Paper | πŸ€— Space | |
| Matcha-TTS | Repo | GDrive | MIT | Yes | English | Paper | πŸ€— Space | GPL-licensed phonemizer |
| MeloTTS | Repo | Hub | MIT | Yes | Multilingual | Not Available | πŸ€— Space | |
| MetaVoice-1B | Repo | Hub | Apache 2.0 | Yes | Multilingual | Not Available | πŸ€— Space | |
| Neural-HMM TTS | Repo | GitHub | MIT | Yes | English | Paper | GH Pages | |
| OpenVoice | Repo | Hub | MIT | No | Multilingual | Paper | πŸ€— Space | |
| OverFlow TTS | Repo | GitHub | MIT | Yes | English | Paper | GH Pages | |
| Parler TTS | Repo | Hub | Apache 2.0 | Yes | English | Not Available | πŸ€— Space | |
| pflowTTS | Unofficial Repo | GDrive | MIT | Yes | English | Paper | Not Available | GPL-licensed phonemizer |
| Pheme | Repo | Hub | CC-BY | Yes | English | Paper | πŸ€— Space | |
| Piper | Repo | Hub | MIT | Yes | Multilingual | Not Available | πŸ€— Space | GPL-licensed phonemizer |
| RAD-MMM | Repo | GDrive | MIT | Yes | Multilingual | Paper | Jupyter Notebook, Webpage | |
| RAD-TTS | Repo | GDrive | MIT | Yes | English | Paper | GH Pages | |
| Silero | Repo | GH links | CC BY-NC-SA | No | Multilingual | Not Available | Not Available | Non Commercial |
| StyleTTS 2 | Repo | Hub | MIT | Yes | English | Paper | πŸ€— Space | GPL-licensed phonemizer |
| Tacotron 2 | Unofficial Repo | GDrive | BSD-3 | Yes | English | Paper | Webpage | |
| TorToiSe TTS | Repo | Hub | Apache 2.0 | Yes | English | Technical report | πŸ€— Space | |
| TTTS | Repo | Hub | MPL 2.0 | No | Multilingual | Not Available | Colab, πŸ€— Space | |
| VALL-E | Unofficial Repo | Not Available | MIT | Yes | NA | Paper | Not Available | |
| VITS/ MMS-TTS | Repo | Hub / MMS | Apache 2.0 | Yes | English | Paper | πŸ€— Space | GPL-licensed phonemizer |
| WhisperSpeech | Repo | Hub | MIT | No | Multilingual | Not Available | πŸ€— Space, Recordings, Colab | |
| XTTS | Repo | Hub | CPML | Yes | Multilingual | Paper | πŸ€— Space | Non Commercial |
| xVASynth | Repo | Hub | GPL-3.0 | Yes | Multilingual | Not Available | πŸ€— Space | Base model trained on non-permissive datasets |
  • Multilingual - Amount of supported languages is ever changing, check the Space and Hub which specific languages are supported
  • ALL - Supports all natural languages; may not support artificial/contructed languages

Also, to find a model for a specific language, filter the TTS models hosted on HuggingFace: https://huggingface.co./models?pipeline_tag=text-to-speech&language=en&sort=trending
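The same filter URL can be assembled programmatically; a minimal sketch using only the Python standard library (the query parameters mirror the link above, and `tts_models_url` is a hypothetical helper name):

```python
from urllib.parse import urlencode

def tts_models_url(language: str, sort: str = "trending") -> str:
    """Build a HuggingFace model-hub filter URL for TTS models."""
    query = urlencode({
        "pipeline_tag": "text-to-speech",
        "language": language,
        "sort": sort,
    })
    return f"https://huggingface.co./models?{query}"

print(tts_models_url("en"))
```

Swapping in another ISO 639-1 code (e.g. `"de"`, `"hi"`) filters the Hub listing to models tagged with that language.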


Legend

For the capability table above. Open the viewer in another window, or even on another monitor, to keep both the table and the legend in view.

  • Processor ⚑ - Inference done by
    • CPU (CPUs = multithreaded) - All models can be run on CPU, so real-time factor should be below 2.0 to qualify for CPU tag, though some more leeway can be given if it supports audio streaming
    • CUDA by NVIDIAβ„’
    • ROCm by AMDβ„’, also see ONNX Runtime HF guide
  • Phonetic alphabet πŸ”€ - Phonetic transcription that allows to control pronunciation of words before inference
    • IPA - International Phonetic Alphabet
    • ARPAbet - American English focused phonetics
  • Insta-clone πŸ‘₯ - Zero-shot model for quick voice cloning
  • Emotion control 🎭 - Able to force an emotional state of speaker
    • 🎭 <# emotions> ( 😑 anger; πŸ˜ƒ happiness; 😭 sadness; 😯 surprise; 🀫 whispering; 😊 friendlyness )
    • 🎭πŸ‘₯ strict insta-clone switch - cloned on sample with specific emotion; may sound different than normal speaking voice; no ability to go in-between states
    • πŸŽ­πŸ“– strict control through prompt - prompt input parameter
  • Prompting πŸ“– - Also a side effect of narrator based datasets and a way to affect the emotional state
    • πŸ“– - Prompt as a separate input parameter
    • πŸ—£πŸ“– - The prompt itself is also spoken by TTS; ElevenLabs docs
  • Streaming support 🌊 - Can playback audio while it is still being generated
  • Speech control 🎚 - Ability to change the pitch, duration, etc. for the whole and/or per-phoneme of the generated speech
  • Voice conversion / Speech-To-Speech 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
  • Longform synthesis πŸ“œ - Able to synthesize whole paragraphs, as some TTS models tend to break down after a certain audio length limit

Example if the proprietary ElevenLabs were to be added to the capabilities table:

| Name | Processor ⚑ | Phonetic alphabet πŸ”€ | Insta-clone πŸ‘₯ | Emotional control 🎭 | Prompting πŸ“– | Speech control 🎚 | Streaming support 🌊 | Voice conversion 🦜 | Longform synthesis πŸ“œ |
|---|---|---|---|---|---|---|---|---|---|
| ElevenLabs | CUDA | IPA, ARPAbet | πŸ‘₯ | πŸŽ­πŸ“– | πŸ—£πŸ“– | 🎚 stability, voice similarity | 🌊 | 🦜 | πŸ“œ Projects |

More info on how the capabilities table came about can be found within the GitHub Issue.

train_data Legend

Legend for the separate TTSDS datasets (train_data viewer on GitHub)

  • 🌐 Multilingual
    • The ISO codes of languages the model is capable off. ❌ if English only.
  • πŸ“š Training Amount (k hours)
    • The number of hours the model was trained on
  • 🧠 Num. Parameters (M)
    • How many parameters the model has, excluding vocoder and text-only components
  • 🎯 Target Repr.
    • Which output representations the model uses, for example audio codecs or mel spectrograms
  • πŸ“– LibriVox Only
    • If the model was trained on librivox-like (audiobook) data alone
  • πŸ”„ NAR
    • If the model has a significant non-autoregressive component
  • πŸ” AR
    • If the model has a significant autoregressive component
  • πŸ”‘ G2P
    • If the model uses G2P (phone inputs)
  • 🧩 Language Model
    • If an LM-like approach is used (next token prediction)
  • 🎡 Prosody Prediction
    • If prosodic correlates such as pitch or energy are predicted
  • 🌊 Diffusion
    • If diffusion is used (outside vocoder)
  • ⏱️ Delay Pattern

Please create pull requests to update the info on the models!
