hamishivi committed on
Commit
152b0d7
1 Parent(s): f3ad50b

Create README.md

Files changed (1): README.md (+35 -0)
README.md ADDED
@@ -0,0 +1,35 @@

---
license: apache-2.0
datasets:
- allenai/dolma
- allenai/tulu-v2-sft-mixture-olmo-4096
- argilla/ultrafeedback-binarized-preferences-cleaned
language:
- en
---
# OLMo-1B-0724 Instruct

This is a version of [OLMo-1B-0724-hf](https://huggingface.co/allenai/OLMo-1B-0724-hf) that has undergone supervised fine-tuning (SFT) followed by direct preference optimization (DPO).
See [the SFT model card](https://huggingface.co/hamishivi/OLMo-1B-0724-SFT-hf) for details on SFT training.
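
A minimal generation sketch using the standard `transformers` chat-template API is below. This is not part of the original card: the repo id comes from the table further down, and the prompt and generation settings are placeholders.

```python
# Sketch only: assumes the tokenizer ships a chat template (as OLMo SFT/Instruct
# checkpoints typically do) and that a recent transformers release is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hamishivi/OLMo-1B-0724-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a single-turn chat prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain DPO training in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```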

This model is initialised from [OLMo-1B-0724-SFT-hf](https://huggingface.co/hamishivi/OLMo-1B-0724-SFT-hf) and then DPO-trained on a cleaned UltraFeedback dataset ([argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)) for 3 epochs with a batch size of 32, a beta of 0.1, and linear warmup over the first 10% of training followed by linear cooldown.
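
The DPO run itself was done with Open-Instruct (linked at the bottom of this card), but for readers who want a concrete starting point, the stated hyperparameters map roughly onto `trl`'s `DPOTrainer` as sketched below. The learning rate, per-device batch split, and dataset column handling are assumptions, not values taken from this card.

```python
# Illustrative only -- the actual training used Open-Instruct, not trl.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "hamishivi/OLMo-1B-0724-SFT-hf"  # DPO starts from the SFT checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Preference data: prompt / chosen / rejected pairs.
dataset = load_dataset("argilla/ultrafeedback-binarized-preferences-cleaned", split="train")

args = DPOConfig(
    output_dir="olmo-1b-0724-dpo",
    beta=0.1,                       # stated DPO beta
    num_train_epochs=3,             # stated number of epochs
    per_device_train_batch_size=8,  # assumed split; the card only states an overall batch size of 32
    gradient_accumulation_steps=4,
    warmup_ratio=0.1,               # linear warmup for 10% of training
    lr_scheduler_type="linear",     # linear decay ("cooldown") afterwards
    learning_rate=5e-7,             # assumption -- not stated on the card
)

# `processing_class` is the argument name in recent trl releases (older ones use `tokenizer=`).
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```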

Evals are as follows:

| Metric | [OLMo-1B-0724-hf](https://huggingface.co/allenai/OLMo-1B-0724-hf) | [OLMo-1B-0724-SFT-hf](https://huggingface.co/hamishivi/OLMo-1B-0724-SFT-hf) | **[OLMo-1B-0724-Instruct-hf](https://huggingface.co/hamishivi/OLMo-1B-0724-Instruct-hf) (this model!)** |
|---------------------------|-----------------|---------------------|-------------------------|
| MMLU 0-shot | 25.0 | 36.0 | **36.7** |
| GSM8k CoT 8-shot | 7.0 | **12.5** | **12.5** |
| BBH CoT 3-shot | 22.5 | 27.2 | **30.6** |
| HumanEval P@10 | 16.0 | 21.2 | **22.0** |
| AlpacaEval 1 | - | 41.5 | **50.9** |
| AlpacaEval 2 LC | - | **2.7** | 2.5 |
| Toxigen % Toxic (lower is better) | 80.3 | 59.7 | **14.1** |
| TruthfulQA % Info+True | 23.0 | 40.9 | **42.2** |
| IFEval Loose Acc | 20.5 | **26.1** | 24.2 |
| XSTest F1 | 67.6 | **81.9** | 79.8 |
| **Average of above metrics** | 25.2 | 33.0 | **38.7** |

Model training and evaluation were performed using [Open-Instruct](https://github.com/allenai/open-instruct), so check that out for more details on evaluation.