SIFT-50M: A Large-Scale Multilingual Dataset for Speech Instruction Fine-Tuning
Abstract
We introduce SIFT (Speech Instruction Fine-Tuning), a 50M-example dataset designed for instruction fine-tuning and pre-training of speech-text large language models (LLMs). SIFT-50M is built from publicly available speech corpora, which collectively contain 14K hours of speech, and leverages LLMs along with off-the-shelf expert models. The dataset spans five languages, encompassing a diverse range of speech understanding as well as controllable speech generation instructions. Using SIFT-50M, we train SIFT-LLM, which outperforms existing speech-text LLMs on instruction-following benchmarks while achieving competitive performance on foundational speech tasks. To support further research, we also introduce EvalSIFT, a benchmark dataset specifically designed to evaluate the instruction-following capabilities of speech-text LLMs.
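For readers who want to experiment with the dataset, below is a minimal sketch of loading it with the Hugging Face datasets library. The repository ID, config, and field names are assumptions based on typical Hugging Face conventions, not confirmed by the abstract; consult the dataset card for the actual values.

    # Minimal sketch: stream SIFT-50M rather than downloading all 50M examples.
    # The dataset ID below is an assumption; check the dataset card for the real one.
    from datasets import load_dataset

    ds = load_dataset("amazon-agi/SIFT-50M", split="train", streaming=True)

    # Inspect a single instruction example (field names are assumed).
    example = next(iter(ds))
    print(example)

Streaming mode avoids materializing the full corpus on disk, which matters at this scale; a non-streaming load of a 50M-example dataset would require substantial local storage.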
Community
The following similar papers were recommended by the Librarian Bot via the Semantic Scholar API:
- InSerter: Speech Instruction Following with Unsupervised Interleaved Pre-training (2025)
- Balancing Speech Understanding and Generation Using Continual Pre-training for Codec-based Speech LLM (2025)
- Audio-FLAN: A Preliminary Release (2025)
- Solla: Towards a Speech-Oriented LLM That Hears Acoustic Context (2025)
- SVLA: A Unified Speech-Vision-Language Assistant with Multimodal Reasoning and Speech Generation (2025)
- LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM (2025)
- Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens (2025)