This language model is a merged version of several pre-trained models, designed to excel at roleplay, long-form question answering, and prompt-following tasks. It was created using the TIES merge method with huihui-ai/Llama-3.2-3B-Instruct-abliterated as the base model.
Intended Use:
This model is suitable for a variety of applications, including:
- Creative Writing: Generating stories, poems, scripts, and other forms of creative text.
- Question Answering: Providing comprehensive and informative answers to a wide range of questions.
- Role-Playing: Engaging in interactive role-playing scenarios with users.
- Prompt Following: Completing tasks and generating text based on specific prompts or instructions.
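As a usage sketch, the merged model can be loaded like any Llama-3.2 instruct checkpoint via the transformers library. The prompt and generation settings below are illustrative assumptions, not recommended defaults:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Llama-3.2-3B-Mix-Skill"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example chat-style prompt; contents are purely illustrative.
messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write a four-line poem about merged models."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```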
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the TIES merge method, with huihui-ai/Llama-3.2-3B-Instruct-abliterated as the base.
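For intuition, here is a toy single-tensor sketch of the TIES procedure (trim low-magnitude task-vector entries, elect a majority sign per parameter, then merge the agreeing deltas). It is an illustrative simplification written for this card, not mergekit's actual implementation; the function name and structure are assumptions:

```python
import torch

def ties_merge(base, tuned, density=0.5, weights=(0.5, 0.5)):
    """Toy single-tensor sketch of TIES: trim, elect sign, disjoint merge."""
    deltas = []
    for t, w in zip(tuned, weights):
        delta = (t - base).flatten()              # task vector vs. the base model
        k = max(1, int(delta.numel() * density))  # `density: 0.5` keeps half the entries
        mask = torch.zeros_like(delta, dtype=torch.bool)
        mask[delta.abs().topk(k).indices] = True  # trim: keep largest-magnitude deltas
        deltas.append(w * delta.view_as(base) * mask.view_as(base))  # `weight: 0.5`
    stacked = torch.stack(deltas)
    sign = stacked.sum(dim=0).sign()              # elect a majority sign per parameter
    agree = stacked.sign() == sign                # drop deltas that conflict with it
    merged = (stacked * agree).sum(dim=0)         # `normalize: false` -> plain weighted sum
    return base + merged

# Usage on toy weight matrices standing in for two fine-tuned models:
base = torch.randn(4, 4)
merged = ties_merge(base, [base + torch.randn(4, 4), base + torch.randn(4, 4)])
```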
Models Merged
The following models were included in the merge:
- bunnycore/Llama-3.2-3B-Long-Think
- bunnycore/Llama-3.2-3B-Pure-RP
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bunnycore/Llama-3.2-3B-Long-Think
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Pure-RP
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
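Assuming mergekit is installed, the merge can be reproduced by saving the YAML above to a file (say, `config.yaml`) and invoking mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./Llama-3.2-3B-Mix-Skill`; the output directory name is an arbitrary choice.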
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 21.40 |
| IFEval (0-shot) | 64.04 |
| BBH (3-shot) | 23.78 |
| MATH Lvl 5 (4-shot) | 12.69 |
| GPQA (0-shot) | 1.57 |
| MuSR (0-shot) | 2.75 |
| MMLU-PRO (5-shot) | 23.56 |