# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as the base.
### Models Merged
The following models were included in the merge:
- [bunnycore/Llama-3.2-3B-CodeReactor](https://huggingface.co/bunnycore/Llama-3.2-3B-CodeReactor)
- [bunnycore/Llama-3.2-3B-Mix-Skill](https://huggingface.co/bunnycore/Llama-3.2-3B-Mix-Skill)
- [bunnycore/Llama-3.2-3B-All-Mix](https://huggingface.co/bunnycore/Llama-3.2-3B-All-Mix)
- [bunnycore/Llama-3.2-3B-ProdigyPlusPlus](https://huggingface.co/bunnycore/Llama-3.2-3B-ProdigyPlusPlus)
- [bunnycore/Llama-3.2-3B-Sci-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think)
- [bunnycore/Llama-3.2-3B-Long-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Long-Think)
- [bunnycore/Llama-3.2-3B-Booval](https://huggingface.co/bunnycore/Llama-3.2-3B-Booval)
- [bunnycore/Llama-3.2-3B-ProdigyPlus](https://huggingface.co/bunnycore/Llama-3.2-3B-ProdigyPlus)
- [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: bunnycore/Llama-3.2-3B-Sci-Think
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-CodeReactor
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Long-Think
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Pure-RP
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Booval
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Mix-Skill
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-ProdigyPlus
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-All-Mix
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-ProdigyPlusPlus
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
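Conceptually, TIES merging works per parameter tensor: it trims each model's delta from the base (keeping only the highest-magnitude fraction, controlled by `density`), elects a per-parameter sign by total magnitude, and averages only the deltas that agree with that sign (scaled by `weight`). The NumPy sketch below is a toy illustration of that idea on plain arrays; it is not mergekit's implementation, and the `ties_merge` function and its signature are invented here for demonstration:

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5, weights=None):
    """Toy TIES sketch: trim, elect sign, disjoint-mean merge (illustrative only)."""
    deltas = [ft - base for ft in finetuned]
    if weights is None:
        weights = [1.0] * len(deltas)
    # Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # Elect sign: per parameter, the sign with the larger weighted total wins.
    sign = np.sign(sum(w * t for w, t in zip(weights, trimmed)))
    # Disjoint merge: weighted-average only the deltas agreeing with the elected sign.
    num = np.zeros_like(base)
    den = np.zeros_like(base)
    for w, t in zip(weights, trimmed):
        agree = (np.sign(t) == sign) & (t != 0)
        num += np.where(agree, w * t, 0.0)
        den += np.where(agree, w, 0.0)
    merged_delta = np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
    return base + merged_delta
```

With `density: 0.5` and uniform `weight: 0.5` as in the config above, each model contributes its strongest half of parameter changes, and conflicting-sign updates cancel rather than average, which is what lets TIES combine this many fine-tunes without the interference of a naive weighted average.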
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                |  6.44 |
| IFEval (0-Shot)     | 16.45 |
| BBH (3-Shot)        | 11.56 |
| MATH Lvl 5 (4-Shot) |  2.95 |
| GPQA (0-shot)       |  0.45 |
| MuSR (0-shot)       |  1.70 |
| MMLU-PRO (5-shot)   |  5.56 |