This model was aligned on the AlpacaFarm dataset by fine-tuning with an alignment loss, starting from the Supervised Fine-Tuned (SFT) version of LLaMA 2 7B. Training ran for a single epoch. For more information on the dataset, see the AlpacaFarm repository (https://github.com/tatsu-lab/alpaca_farm).
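A minimal usage sketch with the `transformers` library is shown below. It assumes the weights are hosted under the repo id `sabersaleh/Llama2-7B-aligned` and follow the standard LLaMA architecture; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the aligned model and generate a completion.
# Assumes the repo id "sabersaleh/Llama2-7B-aligned" and standard LLaMA weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sabersaleh/Llama2-7B-aligned"  # repo id from this model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",
)

prompt = "Explain what instruction alignment means in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```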
Base model: meta-llama/Llama-2-7b