---
license: llama3
language:
- en
- zh
---
|
|
|
# Llama3-8B-Chinese-Chat-ExPO
|
|
|
The extrapolated (ExPO) model based on [`shenzhi-wang/Llama3-8B-Chinese-Chat`](https://huggingface.co./shenzhi-wang/Llama3-8B-Chinese-Chat) and [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct), as described in the paper "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)".
|
|
|
Specifically, we obtain this model by extrapolating **(alpha = 0.3)** beyond the DPO/RLHF checkpoint, along the direction from the SFT checkpoint to the DPO/RLHF checkpoint, to achieve stronger alignment with human preferences.
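The extrapolation step can be sketched as below. The function name is illustrative (not from the ExPO codebase), and the toy example uses scalar values in place of real model tensors; in practice the same formula is applied parameter-wise to the two checkpoints' state dicts.

```python
def expo_extrapolate(sft_weights, dpo_weights, alpha=0.3):
    """Weak-to-strong extrapolation (ExPO).

    Moves past the aligned (DPO/RLHF) checkpoint along the direction
    pointing away from the SFT checkpoint:
        theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft)
    """
    return {
        name: dpo + alpha * (dpo - sft_weights[name])
        for name, dpo in dpo_weights.items()
    }

# Toy example: scalar "weights" standing in for parameter tensors.
sft = {"w": 1.0}
dpo = {"w": 2.0}
expo = expo_extrapolate(sft, dpo, alpha=0.3)
# w = 2.0 + 0.3 * (2.0 - 1.0) = 2.3
```

With alpha = 0, this recovers the DPO/RLHF checkpoint unchanged; larger alpha pushes further along the alignment direction.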