Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ OPT was first introduced in [Open Pre-trained Transformer Language Models](https

This model is [facebook/opt-6.7b](https://hf.co/facebook/opt-6.7b) finetuned with low-rank adapters (https://arxiv.org/abs/2106.09685) on the FLAN datasets (https://arxiv.org/pdf/2210.11416.pdf).

- Low-rank adapters (r=
+ Low-rank adapters (r=16) finetuned over 1.6m new tokens of a FLAN task mixture, with the start of each example cut off if it was too large to fit within a 256 token context.

The model reaches a train ppl of 4.36 and an eval ppl of 4.32.
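The added line describes the finetuning setup: LoRA with r=16 over roughly 1.6m new tokens of a FLAN task mixture, with over-long examples cut off at the start so they fit a 256-token context. A minimal sketch of that setup with the Transformers and PEFT libraries follows; the LoRA alpha, dropout, and target modules, and the `tokenize` helper, are illustrative assumptions not specified in this diff.

```python
# Illustrative sketch (not the authors' training script): LoRA (r=16) on
# facebook/opt-6.7b with examples left-truncated to a 256-token context,
# as described in the updated README line. Only r=16 and the 256-token
# limit come from the README; everything else is an assumed default.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "facebook/opt-6.7b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Cut off the *start* of examples that are too long for the context window.
tokenizer.truncation_side = "left"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# r=16 comes from the README; alpha, dropout, and target modules are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

def tokenize(example):
    # Keep at most 256 tokens per example, dropping the beginning if too long.
    return tokenizer(example["text"], truncation=True, max_length=256)
```

Training would then proceed as a standard causal-LM finetuning loop over the tokenized FLAN mixture until roughly 1.6m new tokens have been seen, yielding the reported train/eval perplexities of 4.36 and 4.32.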