- Slamming: Training a Speech Language Model on One GPU in a Day
  Paper • 2502.15814 • Published • 66
- Small Models Struggle to Learn from Strong Reasoners
  Paper • 2502.12143 • Published • 28
- HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
  Paper • 2502.12574 • Published • 11
- Large Language Diffusion Models
  Paper • 2502.09992 • Published • 99
Collections including paper arxiv:2502.15814
- Adding NVMe SSDs to Enable and Accelerate 100B Model Fine-tuning on a Single GPU
  Paper • 2403.06504 • Published • 53
- Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling
  Paper • 2502.06703 • Published • 142
- Slamming: Training a Speech Language Model on One GPU in a Day
  Paper • 2502.15814 • Published • 66
- How to Synthesize Text Data without Model Collapse?
  Paper • 2412.14689 • Published • 51
- SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator
  Paper • 2412.12094 • Published • 10
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
  Paper • 2306.07691 • Published • 8
- iSTFTNet: Fast and Lightweight Mel-Spectrogram Vocoder Incorporating Inverse Short-Time Fourier Transform
  Paper • 2203.02395 • Published