---
license: llama3.1
library_name: transformers
tags:
- moe
- frankenmoe
- merge
- mergekit
base_model:
- argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
- sequelbox/Llama3.1-8B-PlumCode
- sequelbox/Llama3.1-8B-PlumMath
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
model-index:
- name: L3.1-Moe-4x8B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 43.47
name: strict accuracy
source:
url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=moeru-ai/L3.1-Moe-4x8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 27.86
name: normalized accuracy
source:
url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=moeru-ai/L3.1-Moe-4x8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 11.1
name: exact match
source:
url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=moeru-ai/L3.1-Moe-4x8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.23
name: acc_norm
source:
url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=moeru-ai/L3.1-Moe-4x8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.98
name: acc_norm
source:
url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=moeru-ai/L3.1-Moe-4x8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.27
name: accuracy
source:
url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=moeru-ai/L3.1-Moe-4x8B-v0.1
name: Open LLM Leaderboard
---
# L3.1-Moe-4x8B-v0.1
![cover](https://github.com/moeru-ai/L3.1-Moe/blob/main/cover/v0.1.png?raw=true)
This model is a Mixture of Experts (MoE) built with [mergekit-moe](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md) from the following base models:
- [argilla-warehouse/Llama-3.1-8B-MagPie-Ultra](https://huggingface.co./argilla-warehouse/Llama-3.1-8B-MagPie-Ultra)
- [sequelbox/Llama3.1-8B-PlumCode](https://huggingface.co./sequelbox/Llama3.1-8B-PlumCode)
- [sequelbox/Llama3.1-8B-PlumMath](https://huggingface.co./sequelbox/Llama3.1-8B-PlumMath)
- [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co./ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2)
Heavily inspired by [mlabonne/Beyonder-4x7B-v3](https://huggingface.co./mlabonne/Beyonder-4x7B-v3).
## Quantized models
### GGUF by [mradermacher](https://huggingface.co./mradermacher)
- [mradermacher/L3.1-Moe-4x8B-v0.1-i1-GGUF](https://huggingface.co./mradermacher/L3.1-Moe-4x8B-v0.1-i1-GGUF)
- [mradermacher/L3.1-Moe-4x8B-v0.1-GGUF](https://huggingface.co./mradermacher/L3.1-Moe-4x8B-v0.1-GGUF)
## Configuration
```yaml
base_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: argilla-warehouse/Llama-3.1-8B-MagPie-Ultra
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- "I want"
- source_model: sequelbox/Llama3.1-8B-PlumCode
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: sequelbox/Llama3.1-8B-PlumMath
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
- source_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
```
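In the configuration above, `gate_mode: hidden` tells mergekit to derive each expert's router weights from hidden-state representations of the listed positive prompts, so at inference time tokens resembling "code" or "math" prompts are routed to the matching expert. The routing step itself can be sketched as follows (illustrative pure Python with toy gate vectors and hidden state, not mergekit's or transformers' actual implementation):

```python
import math

def route_token(hidden, expert_gates, top_k=2):
    """Score a token's hidden state against each expert's gate vector
    and return (expert_index, weight) pairs for the top-k experts."""
    # One logit per expert: dot product of hidden state and gate vector.
    logits = [sum(h * g for h, g in zip(hidden, gate)) for gate in expert_gates]
    # Keep only the top-k experts, then softmax their logits into weights.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# Four experts (matching the four source models), toy 3-dim hidden space.
gates = [
    [1.0, 0.0, 0.0],  # chat / general assistant
    [0.0, 1.0, 0.0],  # code
    [0.0, 0.0, 1.0],  # math
    [0.5, 0.5, 0.0],  # storywriting
]
print(route_token([2.0, 1.0, 0.0], gates))  # experts 0 and 3 win for this token
```

Each token is processed by only the selected experts, with their outputs blended by the softmax weights; this is why a 4x8B MoE runs roughly at the cost of the active experts rather than all four.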
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_moeru-ai__L3.1-Moe-4x8B-v0.1).
| Metric |Value|
|-------------------|----:|
|Avg. |19.15|
|IFEval (0-Shot) |43.47|
|BBH (3-Shot) |27.86|
|MATH Lvl 5 (4-Shot)|11.10|
|GPQA (0-shot) | 1.23|
|MuSR (0-shot) | 3.98|
|MMLU-PRO (5-shot) |27.27|
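The reported average is the plain arithmetic mean of the six benchmark scores, which can be checked directly from the table values:

```python
# Scores copied from the leaderboard table above.
scores = {
    "IFEval (0-Shot)": 43.47,
    "BBH (3-Shot)": 27.86,
    "MATH Lvl 5 (4-Shot)": 11.10,
    "GPQA (0-shot)": 1.23,
    "MuSR (0-shot)": 3.98,
    "MMLU-PRO (5-shot)": 27.27,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 19.15, matching the Avg. row
```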