---
license: other
tags:
  - yi
  - moe
license_name: yi-license
license_link: https://huggingface.co./01-ai/Yi-34B-200K/blob/main/LICENSE
---

This is a DPO fine-tuned MoE model with 60B parameters.

## DPO Trainer

TRL provides the DPO Trainer for training language models from preference data, as described in the paper *Direct Preference Optimization: Your Language Model Is Secretly a Reward Model* (Rafailov et al., 2023).
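
For reference, below is a minimal sketch of DPO fine-tuning with TRL's `DPOTrainer`. The base model name and preference dataset are placeholders, not the ones used to produce this model, and exact argument names can vary across TRL releases (e.g. older versions take `tokenizer=` instead of `processing_class=`).

```python
# Minimal DPO fine-tuning sketch with TRL (assumes a recent trl release).
# Model name and dataset are illustrative placeholders only.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "your-base-moe-model"  # placeholder: swap in the actual base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference dataset with "prompt", "chosen", and "rejected" fields.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = DPOConfig(
    output_dir="dpo-output",
    beta=0.1,  # strength of the KL penalty keeping the policy near the reference model
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)

# With ref_model unset, DPOTrainer builds the frozen reference model internally.
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```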