
MegaMath: Pushing the Limits of Open Math Corpora

MegaMath is part of TxT360, curated by the LLM360 Team.

MegaMath Collection

We introduce MegaMath, an open math pretraining dataset curated from diverse, math-focused sources, with over 300B tokens. MegaMath is curated via the following three efforts:

  • Revisiting web data: We re-extracted mathematical documents from Common Crawl using math-oriented HTML optimizations, fastText-based filtering, and deduplication, to acquire higher-quality data from the Internet (see the filtering sketch after this list).
  • Recalling math-related code data: We identified high-quality math-related code from a large code training corpus, Stack-V2, further enhancing data diversity.
  • Exploring synthetic data: We synthesized QA-style text, math-related code, and interleaved text-and-code blocks from the web and code data.
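
As one illustration of the web pipeline, below is a minimal sketch of fastText-based quality filtering, assuming a binary classifier trained to separate math documents from general web text. The model file name, the labels, and the threshold are all illustrative assumptions, not MegaMath's actual settings.

```python
# Minimal sketch of fastText-based math filtering.
# `math_classifier.bin`, the label names, and the 0.5 threshold are
# hypothetical, not the values used to build MegaMath.
import fasttext

model = fasttext.load_model("math_classifier.bin")  # hypothetical model file

def is_math_document(text: str, threshold: float = 0.5) -> bool:
    """Keep a document if the classifier scores it as math above `threshold`."""
    # fastText's predict() expects a single line, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__math" and probs[0] >= threshold

docs = [
    "Let f(x) = x^2 + 3x. Find the critical points of f.",
    "Top 10 travel destinations for summer vacations.",
]
print([d for d in docs if is_math_document(d)])
```

In a setup like this, the classifier would be trained with fastText's supervised mode on labeled math/non-math examples, and the threshold trades recall against precision.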

MegaMath Compared to Existing Datasets

MegaMath is the largest open math pre-training dataset to date, surpassing DeepSeekMath (120B tokens).

MegaMath Delivers with High Quality

During development, we conducted extensive experiments to identify best practices for text extraction, deduplication, fastText training, and more. Models trained on MegaMath outperform those trained on existing open datasets.
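
To make the deduplication step concrete, here is a minimal near-deduplication sketch using MinHash LSH via the `datasketch` package. The shingle size, permutation count, and similarity threshold are illustrative assumptions rather than MegaMath's published configuration.

```python
# Minimal near-deduplication sketch with MinHash LSH (datasketch package).
# Shingle size, num_perm, and the 0.8 Jaccard threshold are illustrative.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128

def minhash(text: str, ngram: int = 5) -> MinHash:
    """Build a MinHash signature over word n-gram shingles of a document."""
    words = text.lower().split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - ngram + 1, 1)):
        m.update(" ".join(words[i:i + ngram]).encode("utf-8"))
    return m

def deduplicate(docs: list[str], threshold: float = 0.8) -> list[str]:
    """Keep each document unless it is near-identical to one already kept."""
    lsh = MinHashLSH(threshold=threshold, num_perm=NUM_PERM)
    kept = []
    for idx, doc in enumerate(docs):
        sig = minhash(doc)
        if lsh.query(sig):  # a near-duplicate was already kept
            continue
        lsh.insert(str(idx), sig)
        kept.append(doc)
    return kept

docs = [
    "The quadratic formula gives the roots of ax^2 + bx + c = 0.",
    "The  quadratic  formula gives the roots of ax^2 + bx + c = 0.",
    "Integration by parts: the integral of u dv equals uv minus the integral of v du.",
]
print(len(deduplicate(docs)))  # 2: the whitespace-only variant collapses
```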

Training the Latest LMs on MegaMath

We also release two proof-of-concept models based on Llama-3.2-1B and Llama-3.2-3B. Training these models on MegaMath brings a 15%–20% performance boost across 10 downstream benchmarks, demonstrating its high data quality.

Detailed Statistics

| Category | # Samples (M) | # Tokens (B) | Avg. # Tokens |
|---|---:|---:|---:|
| **Web Domain** | 121.5 | 279.0 | 2296.9 |
| &nbsp;&nbsp;Web | 106.5 | 263.9 | 2478.7 |
| &nbsp;&nbsp;Web-Pro | 15.0 | 15.1 | 1006.0 |
| **Code Domain** | 13.4 | 28.1 | 2102.7 |
| **Synthetic Data** | 80.2 | 64.5 | 804.5 |
| &nbsp;&nbsp;Translated Code | 7.4 | 7.2 | 979.5 |
| &nbsp;&nbsp;Q&A | 22.6 | 7.0 | 308.3 |
| &nbsp;&nbsp;Text&Code Block | 50.2 | 50.3 | 1002.1 |
| **Total** | 215.1 | 371.6 | 1727.6 |
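
To inspect these subsets, the data can be streamed with the Hugging Face `datasets` library, as in the minimal sketch below. The split name and the "text" field are assumptions; check the actual schema on the hub, and note that the gated repository requires authenticating first.

```python
# Minimal sketch of streaming MegaMath with the `datasets` library.
# Gated repo: run `huggingface-cli login` (or set HF_TOKEN) beforehand.
from datasets import load_dataset

# A specific subset may need to be selected, e.g. via the `data_dir`
# argument; "train" and the "text" field name are assumptions here.
ds = load_dataset("LLM360/MegaMath", split="train", streaming=True)

for example in ds:
    print(example["text"][:200])
    break
```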

Citation

If you use our dataset or find our work useful, please cite

@article{zhou2025megamath,
  title     = {MegaMath: Pushing the Limits of Open Math Corpora},
  author    = {Zhou, Fan and Wang, Zengzhi and Ranjan, Nikhil and Cheng, Zhoujun and Tang, Liping and He, Guowei and Liu, Zhengzhong and Xing, Eric P.},
  journal   = {arXiv preprint arXiv:2504.xxxxx},
  year      = {2025},
  note      = {Preprint}
}