
🔮 Beyonder-4x7B-v3

Beyonder-4x7B-v3 is an improvement over the popular Beyonder-4x7B-v2. It's a Mixture of Experts (MoE) made with the following models using LazyMergekit:

- mlabonne/AlphaMonarch-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/NeuralDaredevil-7B

Special thanks to beowolx for making the best Mistral-based code model and to SanjiWatsuki for creating one of the very best RP models.

Try the demo: https://huggingface.co./spaces/mlabonne/Beyonder-4x7B-v3

๐Ÿ” Applications

This model uses a context window of 8k. I recommend using it with the Mistral Instruct chat template (works perfectly with LM Studio).

If you use SillyTavern, you might want to tweak the inference parameters. Here's what LM Studio uses as a reference: temp 0.8, top_k 40, top_p 0.95, min_p 0.05, repeat_penalty 1.1.
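
As an illustration, here is a minimal sketch of those settings applied through llama-cpp-python with one of the GGUF quants, using the Mistral Instruct [INST] ... [/INST] format recommended above. The GGUF file name is a placeholder, not a file from this repository.

from llama_cpp import Llama

# Illustrative sketch: load a GGUF quant (placeholder file name) with the full 8k context.
llm = Llama(model_path="beyonder-4x7b-v3.Q4_K_M.gguf", n_ctx=8192)

# Mistral Instruct format: the user turn goes inside [INST] ... [/INST].
prompt = "[INST] Write a short scene where two rival alchemists meet at a market. [/INST]"

# LM Studio's reference sampling parameters quoted above.
output = llm(
    prompt,
    max_tokens=256,
    temperature=0.8,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])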

Thanks to its four experts, it's a well-rounded model capable of handling most tasks. Since two experts are always active when generating an answer, every task also benefits from other capabilities: chat can draw on the role-play expert, and math on the code expert, for example.
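
For intuition, here is a schematic sketch (illustrative only, not the actual mergekit or Mixtral routing code) of how a top-2-of-4 MoE layer handles a single token: the router scores the four experts, keeps the two best, and mixes their outputs. Shapes and module names are assumptions.

import torch

# Illustrative top-2 routing over 4 experts for one token's hidden state.
hidden = torch.randn(4096)                                   # token hidden state
router = torch.nn.Linear(4096, 4, bias=False)                # one score per expert
experts = [torch.nn.Linear(4096, 4096) for _ in range(4)]    # stand-ins for the expert FFNs

scores = router(hidden)                                      # [4] gating logits
top_scores, top_idx = torch.topk(scores, k=2)                # select the two best experts
weights = torch.softmax(top_scores, dim=-1)                  # renormalize their weights

# The token's output mixes the two selected experts, so a role-play request
# can still draw on the chat or code expert when the router picks them.
output = sum(w * experts[i](hidden) for w, i in zip(weights, top_idx.tolist()))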

⚡ Quantized models

Thanks to bartowski for quantizing this model.

๐Ÿ† Evaluation

This model is not designed to excel at traditional benchmarks, since the code and role-play experts contribute little in those contexts. Nonetheless, it performs remarkably well thanks to its strong general-purpose experts.

Nous

Beyonder-4x7B-v3 is one of the best models on Nous' benchmark suite (evaluation performed using LLM AutoEval) and significantly outperforms the v2. See the entire leaderboard here.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| mlabonne/AlphaMonarch-7B 📄 | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| mlabonne/Beyonder-4x7B-v3 📄 | 61.91 | 45.85 | 76.67 | 74.98 | 50.12 |
| mlabonne/NeuralDaredevil-7B 📄 | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| SanjiWatsuki/Kunoichi-DPO-v2-7B 📄 | 58.29 | 44.79 | 75.05 | 65.68 | 47.65 |
| mlabonne/Beyonder-4x7B-v2 📄 | 57.13 | 45.29 | 75.95 | 60.86 | 46.4 |
| beowolx/CodeNinja-1.0-OpenChat-7B 📄 | 50.35 | 39.98 | 71.77 | 48.73 | 40.92 |

EQ-Bench

Beyonder-4x7B-v3 is the best 4x7B model on the EQ-Bench leaderboard, outperforming older versions of ChatGPT and Llama-2-70b-chat. It comes very close to Mixtral-8x7B-Instruct-v0.1 and Gemini Pro. Thanks to Sam Paech for running the eval.


Open LLM Leaderboard

It's also a strong performer on the Open LLM Leaderboard, significantly outperforming the v2 model.


🧩 Configuration

base_model: mlabonne/AlphaMonarch-7B
experts:
  - source_model: mlabonne/AlphaMonarch-7B
    positive_prompts:
    - "chat"
    - "assistant"
    - "tell me"
    - "explain"
    - "I want"
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts:
    - "code"
    - "python"
    - "javascript"
    - "programming"
    - "algorithm"
  - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    positive_prompts:
    - "storywriting"
    - "write"
    - "scene"
    - "story"
    - "character"
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts:
    - "reason"
    - "math"
    - "mathematics"
    - "solve"
    - "count"

🌳 Model Family Tree

(Family tree diagram showing Beyonder-4x7B-v3 and its source models.)

💻 Usage

!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Beyonder-4x7B-v3"

# Load the tokenizer and build a 4-bit quantized text-generation pipeline
# (4-bit loading requires bitsandbytes and a CUDA GPU).
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then generate.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])

Output:

A Mixture of Experts (MoE) is a neural network architecture that tackles complex tasks by dividing them into simpler subtasks, delegating each to specialized expert modules. These experts learn to independently handle specific problem aspects. The MoE structure combines their outputs, leveraging their expertise for improved overall performance. This approach promotes modularity, adaptability, and scalability, allowing for better generalization in various applications.
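
Note that recent transformers releases prefer an explicit BitsAndBytesConfig over passing load_in_4bit through model_kwargs. Here is a minimal sketch of the equivalent setup; it is not from the original card, just the same 4-bit loading spelled out.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
import torch

model_id = "mlabonne/Beyonder-4x7B-v3"

# Explicit 4-bit quantization config (equivalent to load_in_4bit=True above).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)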
