AlphaMonarch-7B
tl;dr: AlphaMonarch-7B is a new DPO merge that retains all the reasoning abilities of the very best merges and significantly improves its conversational abilities. Kind of the best of both worlds in a 7B model.
AlphaMonarch-7B is a DPO fine-tune of mlabonne/NeuralMonarch-7B using the argilla/OpenHermes2.5-dpo-binarized-alpha preference dataset.
It is based on a merge of the following models using LazyMergekit:
Special thanks to Jon Durbin, Intel, Argilla, and Teknium for the preference datasets.
Try the demo: https://huggingface.co./spaces/mlabonne/AlphaMonarch-7B-GGUF-Chat
Applications
This model uses a context window of 8k. I recommend using it with the Mistral Instruct chat template (works perfectly with LM Studio).
If you use SillyTavern, you might want to tweak the inference parameters. Here's what LM Studio uses as a reference: temp = 0.8, top_k = 40, top_p = 0.95, min_p = 0.05, repeat_penalty = 1.1.
It is one of the very best 7B models in terms of instruction following and reasoning abilities and can be used for conversations, RP, and storytelling. Note that it tends to have a quite formal and sophisticated style, but this can be changed by adjusting the prompt.
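For programmatic use outside LM Studio, here is a minimal sketch of these settings applied to one of the GGUF quants with llama-cpp-python; the library choice and the GGUF filename are assumptions, not part of the original card.

```python
# Minimal sketch, assuming llama-cpp-python and a locally downloaded GGUF file.
from llama_cpp import Llama

# Hypothetical filename: pick a real quant from
# https://huggingface.co./mlabonne/AlphaMonarch-7B-GGUF
llm = Llama(model_path="alphamonarch-7b.Q4_K_M.gguf", n_ctx=8192)

# create_chat_completion applies the chat template stored in the GGUF metadata
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    temperature=0.8,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```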
Quantized models
Thanks to LoneStriker for the GPTQ, AWQ, and EXL2 quants (a short download sketch follows the list below).
- GGUF: https://huggingface.co./mlabonne/AlphaMonarch-7B-GGUF
- GPTQ: https://huggingface.co./LoneStriker/AlphaMonarch-7B-GPTQ
- AWQ: https://huggingface.co./LoneStriker/AlphaMonarch-7B-AWQ
- mlx: https://huggingface.co./mlx-community/AlphaMonarch-7B-mlx
- EXL2:
- https://huggingface.co./LoneStriker/AlphaMonarch-7B-3.0bpw-h6-exl2
- https://huggingface.co./LoneStriker/AlphaMonarch-7B-4.0bpw-h6-exl2
- https://huggingface.co./LoneStriker/AlphaMonarch-7B-5.0bpw-h6-exl2
- https://huggingface.co./LoneStriker/AlphaMonarch-7B-6.0bpw-h6-exl2
- https://huggingface.co./LoneStriker/AlphaMonarch-7B-8.0bpw-h6-exl2
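A single quantized file can also be fetched programmatically with huggingface_hub. This is a minimal sketch; the GGUF filename below is hypothetical, so check the repository for the actual files.

```python
# Minimal sketch: download one GGUF quant with huggingface_hub.
# The filename is an assumption; list the repo files to pick a real one.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mlabonne/AlphaMonarch-7B-GGUF",
    filename="alphamonarch-7b.Q4_K_M.gguf",  # hypothetical filename
)
print(gguf_path)
```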
Evaluation
Nous
AlphaMonarch-7B is the best-performing 7B model on Nous' benchmark suite (evaluation performed using LLM AutoEval). See the entire leaderboard here.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| AlphaMonarch-7B | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| NeuralMonarch-7B | 62.73 | 45.31 | 76.99 | 78.35 | 50.28 |
| Monarch-7B | 62.68 | 45.48 | 77.07 | 78.04 | 50.14 |
| teknium/OpenHermes-2.5-Mistral-7B | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| mlabonne/NeuralHermes-2.5-Mistral-7B | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| mlabonne/NeuralBeagle14-7B | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| mlabonne/NeuralOmniBeagle-7B | 62.3 | 45.85 | 77.26 | 76.06 | 50.03 |
| eren23/dpo-binarized-NeuralTrix-7B | 62.5 | 44.57 | 76.34 | 79.81 | 49.27 |
| CultriX/NeuralTrix-7B-dpo | 62.5 | 44.61 | 76.33 | 79.8 | 49.24 |
EQ-bench
AlphaMonarch-7B also outperforms 70B and 120B parameter models on EQ-bench by Samuel J. Paech, who kindly ran the evaluations.
MT-Bench
First turn

| Model | Turn | Score |
|---|---|---|
| gpt-4 | 1 | 8.95625 |
| OmniBeagle-7B | 1 | 8.31250 |
| AlphaMonarch-7B | 1 | 8.23750 |
| claude-v1 | 1 | 8.15000 |
| NeuralMonarch-7B | 1 | 8.09375 |
| gpt-3.5-turbo | 1 | 8.07500 |
| claude-instant-v1 | 1 | 7.80000 |

Second turn

| Model | Turn | Score |
|---|---|---|
| gpt-4 | 2 | 9.025000 |
| claude-instant-v1 | 2 | 8.012658 |
| OmniBeagle-7B | 2 | 7.837500 |
| gpt-3.5-turbo | 2 | 7.812500 |
| claude-v1 | 2 | 7.650000 |
| AlphaMonarch-7B | 2 | 7.618750 |
| NeuralMonarch-7B | 2 | 7.375000 |

Average

| Model | Score |
|---|---|
| gpt-4 | 8.990625 |
| OmniBeagle-7B | 8.075000 |
| gpt-3.5-turbo | 7.943750 |
| AlphaMonarch-7B | 7.928125 |
| claude-instant-v1 | 7.905660 |
| claude-v1 | 7.900000 |
| NeuralMonarch-7B | 7.734375 |
| NeuralBeagle14-7B | 7.628125 |
Open LLM Leaderboard
AlphaMonarch-7B is also one of the best-performing 7B models on the Open LLM Leaderboard:
Model Family Tree
Usage
```python
# Install dependencies (notebook-style)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/AlphaMonarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and generate a response
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
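As a variant, here is a minimal sketch (not from the original card) of the same pipeline called with the sampler settings recommended in the Applications section; min_p is omitted because support for it depends on the transformers version.

```python
# Hedged variant: same pipeline as above, but with the sampler settings
# recommended in the Applications section (temp 0.8, top_k 40, top_p 0.95,
# repeat_penalty 1.1). min_p is left out since it requires a recent
# transformers release.
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_k=40,
    top_p=0.95,
    repetition_penalty=1.1,
)
print(outputs[0]["generated_text"])
```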
Evaluation results
Open LLM Leaderboard scores:

| Benchmark | Setting | Metric | Score |
|---|---|---|---|
| AI2 Reasoning Challenge | 25-shot, test set | normalized accuracy | 73.04 |
| HellaSwag | 10-shot, validation set | normalized accuracy | 89.18 |
| MMLU | 5-shot, test set | accuracy | 64.40 |
| TruthfulQA | 0-shot, validation set | mc2 | 77.91 |
| Winogrande | 5-shot, validation set | accuracy | 84.69 |
| GSM8k | 5-shot, test set | accuracy | 66.72 |