---
library_name: transformers
datasets:
- codeparrot/apps
- BAAI/TACO
- AI-MO/NuminaMath-CoT
language:
- en
base_model:
- Qwen/Qwen2.5-32B-Instruct
---

## Model Details

### Model Description

This is a 32B reasoning model trained from Qwen2.5-32B-Instruct on 17K verified responses. Its performance is on par with the o1-preview model on both math and coding benchmarks. Please see our [blog post](https://novasky-ai.github.io/posts/sky-t1/) for more details.

- **Developed by:** NovaSky Team from the Sky Computing Lab at UC Berkeley.

## Training Details

### Training Data

17K verified correct responses from Qwen/QwQ-32B-Preview on coding and math. In addition, we add the science portion from the [Still-2 paper](https://arxiv.org/pdf/2412.09413).

### Training Procedure

We perform supervised fine-tuning on the data with a batch size of 96.

#### Speeds

We use Llama-Factory for training. On 8 H100 GPUs, training takes 19 hours with DeepSpeed ZeRO-3 Offload.

## Evaluation

| Model                  | Math500 | AIME2024 | LiveCodeBench-Easy | LiveCodeBench-Medium | LiveCodeBench-Hard | GPQA-Diamond |
|------------------------|---------|----------|--------------------|----------------------|--------------------|--------------|
| Qwen-2.5-32B-Instruct  | 85.2    | 16.7     | 82.4               | 40.0                 | 8.9                | 42.9         |
| Sky-T1                 | 88.6    | 43.3     | 87.9               | 54.4                 | 17.1               | 53.5         |
| QwQ                    | 90.6    | 50.0     | 88.7               | 57.3                 | 17.9               | 56.6         |
| o1-preview             | 85.5    | 46.6     | 92.0               | 56.6                 | 13.8               | 73.3         |

## Acknowledgement

We would like to thank [Lambda Labs](https://lambdalabs.com/service/gpu-cloud?srsltid=AfmBOop5FnmEFTkavVtdZDsLWvHWNg6peXtat-OXJ9MW5GMNsk756PE5) and [AnyScale](https://www.anyscale.com/) for compute resources, and the [Still-2 Team](https://arxiv.org/pdf/2412.09413) and [Junyang Lin](https://justinlin610.github.io/) from the [Qwen Team](https://qwenlm.github.io/) for their academic feedback and support.

## Citation

Please consider citing our blog post if you found it useful for your research. Thank you!

```bibtex
@misc{sky_t1_2025,
  author       = {NovaSky Team},
  title        = {Sky-T1: Fully open-source reasoning model with o1-preview performance in $450 budget},
  howpublished = {https://novasky-ai.github.io/posts/sky-t1},
  note         = {Accessed: 2025-01-09},
  year         = {2025}
}
```
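
## Usage

Below is a minimal inference sketch using the `transformers` library, following standard usage for Qwen2.5-based chat models. The repository ID `NovaSky-AI/Sky-T1-32B-Preview` is an assumption for illustration (this card does not state the published model ID); substitute the actual repo path, and adjust dtype and device mapping to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo ID -- replace with the actual published model ID.
model_id = "NovaSky-AI/Sky-T1-32B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 32B weights; bf16 keeps memory manageable
    device_map="auto",           # shard across available GPUs
)

# Qwen2.5-based models ship a chat template, so format the prompt with it.
messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so allow a generous budget.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```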