Update README.md
README.md CHANGED
@@ -155,6 +155,10 @@ output_text=generate_response (model, tokenizer, text_input=txt,eos_token_id=eos
 print (output_text[0])
 ```
 
+## Dataset
+
+See [lamm-mit/x-lora-dataset](lamm-mit/x-lora-dataset) for the dataset used to train the X-LoRA model. Details on the datasets used to train the original adapters are included in the paper (see reference below).
+
 ## Acknowledgements
 
 This work is built on the Hugging Face [PEFT library](https://github.com/huggingface/peft/tree/main/src/peft) and other components in the Hugging Face ecosystem.
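For reference, a minimal sketch of loading the dataset pointed to by the new "## Dataset" section with the Hugging Face `datasets` library. The `load_dataset` call is the standard API; the split and column inspection below are illustrative assumptions, not part of the commit.

```python
# Minimal sketch: load the X-LoRA training dataset referenced in the new
# "## Dataset" section. Assumes the Hugging Face `datasets` library is
# installed; the split/column inspection is illustrative, not from the commit.
from datasets import load_dataset

dataset = load_dataset("lamm-mit/x-lora-dataset")

# Show the available splits, then the columns and first record of one split.
print(dataset)
first_split = next(iter(dataset.values()))
print(first_split.column_names)
print(first_split[0])
```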