license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---

This is a DPO fine-tuned MoE model with 60B parameters.

* Fine-tuned with the [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) on the jondurbin/truthy-dpo-v0.1 dataset to improve [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B); a usage sketch follows below.

```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
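
For reference, DPO optimizes the policy directly on preference pairs using the objective from the paper cited above, where $y_w$ is the chosen response, $y_l$ the rejected one, and $\beta$ the KL-penalty strength:

$$\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]$$

Below is a minimal sketch of what such a run looks like with TRL's `DPOTrainer` (0.7.x-style API); the hyperparameters are illustrative assumptions, not the settings used to train this model.

```python
# Illustrative DPO fine-tuning sketch; hyperparameters are assumptions,
# not the actual training configuration of this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "TomGrc/FusionNet_7Bx2_MoE_14B"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

# truthy-dpo-v0.1 provides the prompt/chosen/rejected columns DPOTrainer expects;
# its extra columns (id, source, system) are ignored here.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

training_args = TrainingArguments(
    output_dir="fusionnet-truthy-dpo",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.1,  # KL-penalty strength from the DPO paper
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Keeping a separate frozen `ref_model` is what anchors the policy to the starting checkpoint; `beta` controls how far the DPO update is allowed to drift from it.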