Update README.md

README.md CHANGED
@@ -2,13 +2,13 @@ This is a model trained in four stages (Use with Llama-8B-Instruct or Llama-8B-I
-Base Model -- 1 Gig of semi-structured pretraining data
+Base Model -- 1 Gig of semi-structured pretraining data (Uniform distribution centered around 4096 ctx length, b/w 512-8192)
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/637f3b03932a61b89aefbf5c/hpdbVRrM1yt65-gNtRIfT.png)
 - Base pretraining phase 1 (Constant LR, text completion -- 20,000 steps 2/3 epoch)
 - Base pretraining phase 2 (Cosine LR, text completion -- 10,000 steps 1/3 epoch)
-Merge LORA into instruct model -- 100 MB of structured story-instruct data
+Merge LORA into instruct model -- 100 MB of structured story-instruct data (All samples attempt to be near 8192 ctx fullsize instructions)
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/637f3b03932a61b89aefbf5c/V1Jf07k8JdI0_OzIDc7FF.png)
 - Story-instruct tune phase 1 (Constant LR, ~1250 steps, 1 epoch)
 - Story-instruct tune phase 2 (Cosine LR, ~1250 steps, 1 epoch)
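The constant-then-cosine schedule used across the two base pretraining phases can be sketched as below. The step counts (20,000 constant + 10,000 cosine) come from the card; `BASE_LR` and `MIN_LR` are assumed values, since the card does not state the actual learning rates.

```python
import math

PHASE1_STEPS = 20_000   # phase 1: constant LR (from the card)
PHASE2_STEPS = 10_000   # phase 2: cosine decay (from the card)
BASE_LR = 2e-5          # assumed; the card does not give the LR
MIN_LR = 0.0            # assumed floor for the cosine decay

def lr_at(step: int) -> float:
    """Learning rate at a given global training step."""
    if step < PHASE1_STEPS:
        # Phase 1: hold the learning rate constant.
        return BASE_LR
    # Phase 2: cosine decay from BASE_LR down to MIN_LR.
    progress = min((step - PHASE1_STEPS) / PHASE2_STEPS, 1.0)
    return MIN_LR + 0.5 * (BASE_LR - MIN_LR) * (1 + math.cos(math.pi * progress))
```

In practice this is what stock schedulers implement (e.g. a constant schedule for phase 1, then resuming with a cosine schedule for phase 2); the function above just makes the shape explicit.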
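Numerically, "merge LORA into instruct model" folds the low-rank adapter into the base weights so the adapter can be discarded. A minimal sketch, assuming the standard LoRA formulation `W' = W + (alpha / r) * B @ A`, with tiny plain-Python matrices standing in for real model tensors:

```python
def matmul(B, A):
    """Multiply a (d x r) matrix by an (r x k) matrix, as nested lists."""
    return [[sum(B[i][t] * A[t][j] for t in range(len(A)))
             for j in range(len(A[0]))]
            for i in range(len(B))]

def merge_lora(W, A, B, alpha, r):
    """Return the merged weight W + (alpha / r) * B @ A."""
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update, shape of W
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: identity base weight, rank-1 adapter, alpha=2, r=1.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [1.0]]   # (2 x 1)
A = [[1.0, 1.0]]     # (1 x 2)
merged = merge_lora(W, A, B, alpha=2, r=1)  # -> [[3.0, 2.0], [2.0, 3.0]]
```

With real checkpoints this step is typically done with a library helper (e.g. peft's `merge_and_unload()` on a loaded adapter) rather than by hand; the sketch only shows the arithmetic being performed.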