DCLM Logo

Check out our more recent, higher-performing model here: https://huggingface.co./TRI-ML/DCLM-1B/

Model Card for DCLM-1B-v0

DCLM-1B-v0 is a 1.4 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.

Model Details

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| 1.4B | 2.6T            | 24     | 2048        | 16              | 2048           |

Model Description

  • Developed by: DataComp for Language Models (DCLM) Team
  • Model type: Decoder-only Transformer language model
  • Language(s): English (primarily)
  • License: Apache 2.0
  • Contact: [email protected]
  • Date: July 2024

Model Sources

  • Paper: https://arxiv.org/abs/2406.11794 (DataComp-LM: In search of the next generation of training sets for language models)

Quickstart

First, install open_lm:

pip install git+https://github.com/mlfoundations/open_lm.git

Then you can load the model using HF's Auto classes as follows:

from open_lm.hf import *  # registers the OpenLM architecture with transformers' Auto classes
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TRI-ML/DCLM-1B-v0")
model = AutoModelForCausalLM.from_pretrained("TRI-ML/DCLM-1B-v0")

inputs = tokenizer(["Machine learning is"], return_tensors="pt")
# Sampling settings for generation
gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(inputs['input_ids'], **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
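
If you want to sanity-check the loaded checkpoint against the Model Details table above, here is a minimal sketch (it assumes model has already been loaded as in the quickstart):

# Count parameters; should come out to roughly 1.4B for DCLM-1B-v0
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params / 1e9:.2f}B")
# The HF config exposes the architecture hyperparameters (layers, hidden size, heads, ...)
print(model.config)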

Training Details

The model was trained using the following setup:

  • Architecture: Decoder-only Transformer
  • Framework: PyTorch with OpenLM
  • Optimizer: AdamW
  • Learning Rate: 1e-2 (peak)
  • Weight Decay: 1e-2
  • Batch Size: 2048 sequences
  • Sequence Length: 2048 tokens
  • Total Training Tokens: 2.6T
  • Hardware: Trained on H100 GPUs

We train our 1.4B model for 2.6T tokens on DCLM-Baseline. Similar to the 7B model training recipe described in Appendix P of our paper, we train for 2.3T tokens on DCLM-Baseline combined with the StarCoder and ProofPile2 datasets, using the hyperparameters described above. Note that we use a schedule set for the full dataset and stop training early at 2.3T tokens. We then cool the model down on the same dataset to the cooldown LR over 200B tokens. We will update our paper soon with more training details.
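
For illustration, the learning-rate schedule described above (a schedule set for the full token budget, stopped early at 2.3T tokens, then a 200B-token cooldown) can be sketched as follows. This is a rough sketch, not the actual open_lm training code: it assumes a cosine main schedule and a linear cooldown, omits warmup, and uses a placeholder cooldown_lr since the card does not state that value.

import math

# Values taken from the training details above; cooldown_lr is a placeholder.
peak_lr = 1e-2
total_tokens = 2.6e12      # schedule is set as if training on the full token budget
main_tokens = 2.3e12       # main phase stops early at 2.3T tokens
cooldown_tokens = 2e11     # 200B-token cooldown on the same data
cooldown_lr = 1e-5         # assumed placeholder; not specified in this card

def lr_at(tokens_seen: float) -> float:
    """Cosine schedule over total_tokens, stopped early, then an (assumed) linear cooldown."""
    if tokens_seen <= main_tokens:
        progress = tokens_seen / total_tokens
        return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
    lr_at_stop = 0.5 * peak_lr * (1.0 + math.cos(math.pi * main_tokens / total_tokens))
    frac = min((tokens_seen - main_tokens) / cooldown_tokens, 1.0)
    return lr_at_stop + frac * (cooldown_lr - lr_at_stop)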

Evaluation

Here are the evaluation results for DCLM-1B on various tasks (using the llm-foundry eval suite).

| Task | Score |
|------|-------|
| AGI Eval LSAT AR | 0.2348 |
| AGI Eval LSAT LR | 0.3098 |
| AGI Eval LSAT RC | 0.3321 |
| AGI Eval SAT English | 0.3883 |
| AGI Eval SAT Math (CoT) | 0.0182 |
| AQuA (CoT) | 0.0245 |
| ARC (challenge) | 0.4343 |
| ARC (easy) | 0.7290 |
| BBQ | 0.4670 |
| BigBench Conceptual Combinations | 0.4660 |
| BigBench Conlang Translation | 0.0732 |
| BigBench CS Algorithms | 0.4515 |
| BigBench Dyck Languages | 0.1990 |
| BigBench Elementary Math QA | 0.2558 |
| BigBench Language Identification | 0.2911 |
| BigBench Logical Deduction | 0.2480 |
| BigBench Misconceptions | 0.5068 |
| BigBench Novel Concepts | 0.5312 |
| BigBench Operators | 0.2714 |
| BigBench QA Wikidata | 0.6687 |
| BigBench Repeat Copy Logic | 0.1562 |
| BigBench Strange Stories | 0.6839 |
| BigBench Strategy QA | 0.5762 |
| BigBench Understanding Fables | 0.4127 |
| BoolQ | 0.7131 |
| CommonSenseQA | 0.6110 |
| COPA | 0.7900 |
| CoQA | 0.4257 |
| Enterprise PII Classification | 0.5110 |
| GPQA Diamond | 0.2121 |
| GPQA | 0.2344 |
| GSM8K (CoT) | 0.0371 |
| HellaSwag | 0.7087 |
| HellaSwag (zero-shot) | 0.7001 |
| Jeopardy | 0.4218 |
| LAMBADA (OpenAI) | 0.6938 |
| LogiQA | 0.3026 |
| MathQA | 0.2598 |
| MMLU (few-shot) | 0.4193 |
| MMLU (zero-shot) | 0.3543 |
| OpenBookQA | 0.4380 |
| PIQA | 0.7786 |
| PubMedQA (labeled) | 0.2560 |
| Simple Arithmetic (no spaces) | 0.0280 |
| Simple Arithmetic (with spaces) | 0.0300 |
| SIQA | 0.6735 |
| SQuAD | 0.5424 |
| SVAMP (CoT) | 0.1800 |
| TriviaQA (small subset) | 0.3603 |
| Winogender (MC female) | 0.4833 |
| Winogender (MC male) | 0.5000 |
| Winograd | 0.8352 |
| Winogrande | 0.6527 |

Note: All scores are presented as decimal values between 0 and 1, representing the proportion of correct answers or the model's performance on each task.

Below we compare to the recently released SmolLM (https://huggingface.co./blog/smollm) on key benchmarks. As described in the paper, Core accuracy is the average of centered accuracy over 22 tasks (including HellaSwag and ARC-E), and Extended is centered accuracy averaged over 53 tasks. We evaluate the models using llm-foundry; a sketch of how a centered-accuracy aggregate is computed follows the table.

| Model   | Core | Extended | MMLU 5-shot |
|---------|------|----------|-------------|
| DCLM-1B | 42.3 | 25.1     | 41.9        |
| SmolLM  | 36.3 | 21.2     | 30.0        |
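
As a reference for how an aggregate like Core can be computed, here is a minimal sketch. It assumes centered accuracy is defined as (accuracy - chance) / (1 - chance), with chance the random-guessing baseline for the task; the task list and baselines below are purely illustrative, not the actual 22-task Core set.

def centered_accuracy(acc: float, chance: float) -> float:
    # Assumed definition: rescale so random guessing scores 0 and a perfect model scores 1.
    return (acc - chance) / (1.0 - chance)

# Illustrative (accuracy, random-chance baseline) pairs taken from the table above.
scores = {
    "ARC (easy)": (0.7290, 0.25),
    "HellaSwag": (0.7087, 0.25),
    "BoolQ": (0.7131, 0.50),
}
core_like = sum(centered_accuracy(a, c) for a, c in scores.values()) / len(scores)
print(f"Centered accuracy averaged over {len(scores)} tasks: {core_like:.3f}")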

Limitations and Biases

While DCLM-1B demonstrates strong performance across a range of tasks, it's important to note:

  1. The model may exhibit biases present in its training data, which is derived from web crawl data.
  2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
  3. Performance on tasks not included in the evaluation suite may vary.
  4. The model's knowledge is limited to its training data cutoff date.

Ethical Considerations

Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.

Citation

If you use this model in your research, please cite:

@article{Li2024DataCompLM,
  title={DataComp-LM: In search of the next generation of training sets for language models},
  author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
  journal={arXiv preprint arXiv:2406.11794},
  year={2024}
}