# Starling-LM-7B-beta-GGUF
- Model creator: Nexusflow
- Original model: Starling-LM-7B-beta
## Description
This repo contains GGUF-format model files for Nexusflow/Starling-LM-7B-beta.
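As a rough sketch, these files can be run locally with llama-cpp-python (or any other GGUF-compatible runtime). The local filename and quantization level below are assumptions; substitute whichever GGUF file you downloaded from this repo.

```python
from llama_cpp import Llama

# Hypothetical local filename -- use the GGUF file you actually downloaded.
llm = Llama(model_path="Starling-LM-7B-beta.Q4_K_M.gguf", n_ctx=4096)

# Starling uses the OpenChat-3.5 chat template (see Model Summary below).
prompt = "GPT4 Correct User: Hello, who are you?<|end_of_turn|>GPT4 Correct Assistant:"

out = llm(prompt, max_tokens=256, stop=["<|end_of_turn|>"])
print(out["choices"][0]["text"])
```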
## Model Summary
- Developed by: The Nexusflow Team (Banghua Zhu*, Evan Frick*, Tianhao Wu*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- Model type: Language Model finetuned with RLHF / RLAIF
- License: Apache-2.0, under the condition that the model is not used to compete with OpenAI
- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and the policy optimization method of Fine-Tuning Language Models from Human Preferences (PPO). Harnessing the ranking dataset berkeley-nest/Nectar, the upgraded reward model Starling-RM-34B, and the new reward-training and policy-tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 on MT-Bench with GPT-4 as a judge.
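Starling-LM-7B-beta inherits the OpenChat-3.5 conversation template from its base model, so prompts should be wrapped in the "GPT4 Correct User" / "GPT4 Correct Assistant" turn format. A minimal sketch of that formatting follows; the helper function name is ours, not part of any library.

```python
def format_openchat_prompt(turns):
    """Format (role, message) pairs into the OpenChat-3.5 turn format
    that Starling-LM-7B-beta expects; roles are 'user' or 'assistant'."""
    parts = []
    for role, message in turns:
        speaker = "GPT4 Correct User" if role == "user" else "GPT4 Correct Assistant"
        parts.append(f"{speaker}: {message}<|end_of_turn|>")
    # A trailing assistant tag cues the model to produce the next reply.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

print(format_openchat_prompt([("user", "What is RLAIF?")]))
# GPT4 Correct User: What is RLAIF?<|end_of_turn|>GPT4 Correct Assistant:
```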
## Citation
```
@misc{starling2023,
    title={Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
    url={},
    author={Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
    month={November},
    year={2023}
}
```