Update README.md
README.md CHANGED
@@ -73,7 +73,10 @@ A small 101M param (total) decoder model. This is the first version of the model
 
 **This checkpoint** is the 'raw' pre-trained model and has not been tuned to a more specific task. **It should be fine-tuned** before use in most cases.
 
-
+### Checkpoints & Links
+
+- _smol_-er 81M parameter checkpoint with in/out embeddings tied: [here](https://huggingface.co/BEE-spoke-data/smol_llama-81M-tied)
+- Fine-tuned on `pypi` to generate Python code - [link](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA-python)
 - For the chat version of this model, please [see here](https://youtu.be/dQw4w9WgXcQ?si=3ePIqrY1dw94KMu4)
 
 ---
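Since the hunk stresses that this is the raw pre-trained checkpoint, a minimal usage sketch may help readers of the diff. The repo id `BEE-spoke-data/smol_llama-101M-GQA` is an assumption inferred from the sibling links above, and the prompt and generation settings are illustrative only.

```python
# A minimal sketch of loading the raw checkpoint and sampling a continuation.
# Assumption: the repo id mirrors the linked variants above; verify before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "BEE-spoke-data/smol_llama-101M-GQA"  # assumed base-model repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The raw model only continues text; it is not instruction- or chat-tuned,
# which is why the README recommends fine-tuning before most uses.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As for the 81M variant linked above, "in/out embeddings tied" means the input embedding matrix is shared with the LM head (in `transformers` terms, `tie_word_embeddings=True` in the model config), which drops the separate output projection and accounts for the smaller parameter count.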