---
license: apache-2.0
---

# X-LoRA: Mixture of Low-Rank Adapter Experts, a Flexible Framework for Large Language Models

X-LoRA works by learning scaling values for LoRA adapters. These learned scaling values are used to gate the LoRA experts in a dense fashion. Additionally, all LoRA adapters and the base model are frozen, allowing efficient fine-tuning due to a low trainable parameter count.
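
For intuition, the dense gating can be sketched in a few lines of PyTorch. The sketch below is illustrative only and is not the X-LoRA implementation: the class name, shapes, initialization, and the softmax choice are all assumptions. The real method trains a small scaling head on the model's hidden states and uses its outputs to weight the frozen LoRA experts.

```python
import torch
import torch.nn as nn

class DenseGatingSketch(nn.Module):
    """Illustrative sketch of X-LoRA-style dense gating; NOT the library's code."""

    def __init__(self, hidden_dim: int, n_experts: int, rank: int = 8):
        super().__init__()
        # Frozen low-rank expert pairs (A_i, B_i); requires_grad=False mirrors
        # the setup in which all LoRA adapters and the base model stay frozen.
        self.A = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(hidden_dim, rank), requires_grad=False)
             for _ in range(n_experts)]
        )
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(rank, hidden_dim), requires_grad=False)
             for _ in range(n_experts)]
        )
        # The only trainable piece: a head predicting one scaling per expert.
        self.scaling_head = nn.Linear(hidden_dim, n_experts)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Dense gating: every expert contributes, weighted by its learned scaling.
        scalings = torch.softmax(self.scaling_head(h), dim=-1)  # (..., n_experts)
        delta = torch.zeros_like(h)
        for i in range(len(self.A)):
            delta = delta + scalings[..., i:i + 1] * (h @ self.A[i] @ self.B[i])
        return h + delta

# Example: gate hidden states of shape (batch, seq, hidden).
gate = DenseGatingSketch(hidden_dim=64, n_experts=4)
out = gate(torch.randn(2, 10, 64))
```

Only the scaling head is trained here, which mirrors why X-LoRA's fine-tuning stays cheap: the experts and the base model contribute no trainable parameters.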

```python
output_text = generate_response(model, tokenizer, text_input=txt, eos_token_id=eos_token_id)
print(output_text[0])
```
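
The two lines above close a longer usage example; the `generate_response` helper they call is defined earlier in the full README and is not reproduced here. As a rough, hypothetical sketch of what such a helper could look like on top of the standard `transformers` generation API (the signature past the arguments visible above, and all defaults, are assumptions):

```python
import torch

def generate_response(model, tokenizer, text_input, eos_token_id, max_new_tokens=256):
    """Hypothetical helper: tokenize a prompt, generate, and decode the output."""
    # Tokenize the prompt and move the tensors to the model's device.
    inputs = tokenizer(text_input, return_tensors="pt").to(model.device)
    with torch.no_grad():
        # Standard Hugging Face generation, stopping at the given EOS token id.
        generated = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            eos_token_id=eos_token_id,
        )
    # A list of decoded sequences, so output_text[0] is the first completion.
    return tokenizer.batch_decode(generated, skip_special_tokens=True)
```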

## Acknowledgements

This work is built on the Hugging Face [PEFT library](https://github.com/huggingface/peft/tree/main/src/peft) and other components in the Hugging Face ecosystem.

## Original paper and citation

Cite this work as:
```bibtex
@article{NiBuehler_2024,
    title   = {X-LoRA: Mixture of Low-Rank Adapter Experts, a Flexible Framework for Large Language Models with Applications in Protein Mechanics and Design},
    author  = {E.L. Buehler and M.J. Buehler},
    journal = {},
    year    = {2024},