arxiv:2412.08347

SmolTulu: Higher Learning Rate to Batch Size Ratios Can Lead to Better Reasoning in SLMs

Published on Dec 11
· Submitted by SultanR on Dec 16

Abstract

We present SmolTulu-1.7b-Instruct, referenced in this report as SmolTulu-DPO-1130, an instruction-tuned language model that adapts AllenAI's Tulu 3 post-training pipeline to enhance Hugging Face's SmolLM2-1.7B base model. Through comprehensive empirical analysis using a 135M parameter model, we demonstrate that the relationship between learning rate and batch size significantly impacts model performance in a task-dependent manner. Our findings reveal a clear split: reasoning tasks like ARC and GSM8K benefit from higher learning rate to batch size ratios, while pattern recognition tasks such as HellaSwag and IFEval show optimal performance with lower ratios. These insights informed the development of SmolTulu, which achieves state-of-the-art performance among sub-2B parameter models on instruction following (67.7% on IFEval, Δ11%) and mathematical reasoning (51.6% on GSM8K, Δ3.4%), with an alternate version scoring 57.1% on ARC (Δ5.4%). We release our model, training recipes, and ablation studies to facilitate further research in efficient model alignment, demonstrating that careful adaptation of optimization dynamics can help bridge the capability gap between small and large language models.
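The abstract's central knob, the learning-rate-to-batch-size ratio, can be expressed as a small helper. This is a hedged illustration with hypothetical names and values, not the paper's released training recipe:

```python
def lr_for_ratio(ratio: float, batch_size: int) -> float:
    """Return the learning rate that realizes a target LR/batch-size ratio.

    The paper tunes this ratio per task family: higher ratios for
    reasoning tasks (ARC, GSM8K), lower ratios for pattern-recognition
    tasks (HellaSwag, IFEval). The ratio values below are illustrative
    placeholders, not the paper's actual settings.
    """
    return ratio * batch_size

# Conventional linear scaling holds the ratio fixed as batch size grows;
# the paper instead treats the ratio itself as the task-dependent quantity.
reasoning_lr = lr_for_ratio(ratio=2.5e-7, batch_size=8)   # higher ratio
pattern_lr = lr_for_ratio(ratio=5.0e-8, batch_size=32)    # lower ratio
```

Under this framing, two runs with the same learning rate but different batch sizes sit at different points on the ratio axis, which is what the 135M-parameter ablations vary.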

Community

Paper author and submitter (edited 10 days ago):

Discussion

Hey everyone! I'm excited to share my work on SmolTulu, where I explored how optimization dynamics play a surprisingly important role in small language models' reasoning performance. I found that higher learning rate to batch size ratios significantly improved reasoning capabilities in the 1.7B parameter model, helping it achieve state-of-the-art results among sub-2B models on GSM8K (51.6%) and IFEval (67.7%, first on the Open LLM Leaderboard). This challenges the conventional wisdom of linearly scaling learning rates with batch size, suggesting smaller models may benefit from fundamentally different optimization strategies than their larger counterparts.

I've open-sourced the model and training recipes to help advance research in efficient model alignment. I'm particularly curious to hear the community's thoughts on whether these findings generalize more broadly, whether others have observed similar dynamics in their work with smaller models, or even whether I've made some fatal flaw! I wanted to estimate the Hessian to see if the model is in a flat minimum (associated with generalization) or a sharp one, but sadly didn't have the compute for it. Looking forward to the discussion :)
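The flat-vs-sharp question raised above is commonly probed via the top eigenvalue of the loss Hessian. Here is a minimal pure-Python sketch on a toy two-parameter loss, using finite-difference Hessian-vector products and power iteration; all names are hypothetical, and a real estimate on a 1.7B-parameter model (the expensive part the comment refers to) would use autograd-based Hessian-vector products instead:

```python
import math

def loss(w):
    # Toy quadratic loss with one sharp direction (curvature 100)
    # and one flat direction (curvature 0.1).
    return 50.0 * w[0] ** 2 + 0.05 * w[1] ** 2

def grad(w, eps=1e-5):
    # Central-difference gradient of the loss.
    g = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        g.append((loss(wp) - loss(wm)) / (2 * eps))
    return g

def hvp(w, v, eps=1e-4):
    # Finite-difference Hessian-vector product:
    # H v ~= (grad(w + eps*v) - grad(w - eps*v)) / (2*eps)
    wp = [wi + eps * vi for wi, vi in zip(w, v)]
    wm = [wi - eps * vi for wi, vi in zip(w, v)]
    gp, gm = grad(wp), grad(wm)
    return [(a - b) / (2 * eps) for a, b in zip(gp, gm)]

def top_eigenvalue(w, iters=50):
    # Power iteration on the Hessian. The top eigenvalue is a standard
    # sharpness proxy: large => sharp minimum, small => flat minimum.
    v = [1.0, 1.0]
    lam = 0.0
    for _ in range(iters):
        hv = hvp(w, v)
        lam = math.sqrt(sum(x * x for x in hv))
        v = [x / lam for x in hv]
    return lam

sharpness = top_eigenvalue([0.0, 0.0])  # close to 100 for this toy loss
```

For a quadratic loss the finite differences are exact, so the estimate converges to the sharp direction's curvature; for a real network one would replace `grad` and `hvp` with autograd calls and average over minibatches.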

