Update README.md
README.md CHANGED
@@ -1,3 +1,10 @@
----
-license: llama3
----
+---
+license: llama3
+---
+
+Prompt format is the same as Llama 3: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/
+Standard context length of 8192
+
+This model was trained on 100MB of long-form stories for 8 epochs. It was designed to do two tasks: continue a story given a summary of the previous events, and write 3k-8k length stories from a single prompt.
+
+The dataset was constructed from cleaned long-form dialogue, restructured, then summarized with Llama-70B and temporally stacked so that the summary of the past dialogue begins the next dialogue. Almost all samples were between 7500 and 8192 tokens long.
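A minimal inference sketch with transformers, assuming the Llama 3 chat template ships with this model's tokenizer; the repo id and the prompts below are placeholders, not taken from this model card:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-model"  # placeholder: substitute this model's actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 3 prompt format (see the link above), applied through the tokenizer's chat template.
messages = [
    {"role": "system", "content": "Continue the story from the summary below."},  # illustrative
    {"role": "user", "content": "Summary of previous events: ..."},               # illustrative
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Standard 8192-token context: keep the prompt plus generation inside that window.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=4096)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))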
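A rough sketch of the temporal stacking described above; the summarize and chunk_story helpers are hypothetical stand-ins, since the actual dataset pipeline is not published here:

def temporally_stack(chunks, summarize):
    """Prepend the summary of the previous chunk to each following chunk,
    so every training sample opens with a recap of past events."""
    samples = []
    prev_summary = ""
    for chunk in chunks:
        sample = f"{prev_summary}\n\n{chunk}".strip() if prev_summary else chunk
        samples.append(sample)           # packing aimed for 7500-8192 tokens per sample
        prev_summary = summarize(chunk)  # e.g. a Llama-70B summarization call
    return samples

# samples = temporally_stack(chunk_story(raw_dialogue), summarize_with_llama_70b)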