---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
---

# SeqKD-gpt2-120M

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

**SeqKD-gpt2-120M** is a gpt2-base (120M) model distilled from [gpt2-xlarge (1.5B)](https://huggingface.co./MiniLLM/teacher-gpt2-1.5B) on [databricks-dolly-15k](https://huggingface.co./datasets/aisquared/databricks-dolly-15k) with sequence-level forward KLD. It serves as a baseline for [MiniLLM](https://huggingface.co./MiniLLM/MiniLLM-gpt2-120M).

## Other Baselines

+ [SFT w/o KD](https://huggingface.co./MiniLLM/SFT-gpt2-120M)
+ [KD](https://huggingface.co./MiniLLM/KD-gpt2-120M)

## Citation

```
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```
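## Usage

A minimal generation sketch with Hugging Face Transformers. The repo id `MiniLLM/SeqKD-gpt2-120M` and the instruction-style prompt format are assumptions inferred from this card (the sibling baselines live under the `MiniLLM` org), not something the card confirms.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the model name and the MiniLLM org links above.
model_id = "MiniLLM/SeqKD-gpt2-120M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model was distilled on instruction data (dolly-15k), so an
# instruction-style prompt is a reasonable guess; adjust to the format
# used in the MiniLLM codebase if it differs.
prompt = "Instruction: Explain knowledge distillation in one sentence.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```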