
PDS-1B

paper | code

PDS-1B is a 1B-parameter model with the Mistral architecture, pre-trained from scratch on data selected from the CC split of RedPajama using the PDS framework.

The PDS framework selects pre-training data based on Pontryagin's maximum principle for optimal control, which not only enjoys strong theoretical support but also scales to training large language models.

Please refer to our paper for more details.
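As a minimal usage sketch, the model can be loaded with Hugging Face Transformers like any Mistral-architecture causal LM. The repo id `Data-Selection/PDS-1B` is taken from this model page; the prompt and generation settings below are illustrative placeholders, not the authors' evaluation setup.

```python
# Hypothetical usage sketch for PDS-1B with Transformers.
# The repo id comes from the model page; generation settings are illustrative.
MODEL_ID = "Data-Selection/PDS-1B"

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    prompt = "Data selection for language model pre-training"
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding with a small budget, just to show the API shape.
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```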

Overview of the theory:

Overview of the PDS framework:

Evaluation

PDS-selected data improves the performance of language models pre-trained from scratch and saves pre-training computation. The improvement scales up to large model sizes.

(Results tables comparing PDS against the conventional pre-training baseline; see the paper for the full numbers.)

Citation

@article{gu2024data,
  title={Data Selection via Optimal Control for Language Models},
  author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
  journal={arXiv preprint arXiv:2410.07064},
  year={2024}
}
