Gemma-2-Ataraxy-v2f-9B
Another test model you should probably ignore.
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the della merge method, with unsloth/gemma-2-9b-it as the base.
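Roughly, della builds a delta ("task vector") for each model against the base, drops a fraction of low-magnitude entries (controlled by each model's density), rescales the survivors, takes the weighted sum, and adds lambda times the result back onto the base; in the real method the dropping is stochastic, with epsilon governing how drop probabilities vary with magnitude. Below is a minimal single-tensor sketch using deterministic top-k pruning in place of that magnitude-based sampling; the function name and simplifications are illustrative, not mergekit's actual code:

```python
import torch

def della_style_merge(base, tuned_weights, densities, weights, lam=1.0, normalize=True):
    """Toy single-tensor sketch of a della-style merge (simplifications mine).

    Real DELLA drops delta parameters stochastically, with drop probability
    tied to magnitude; here we use deterministic top-k magnitude pruning to
    keep the sketch short. Not mergekit's implementation.
    """
    merged_delta = torch.zeros_like(base)
    total_weight = 0.0
    for tuned, density, weight in zip(tuned_weights, densities, weights):
        delta = tuned - base                           # task vector vs. the base model
        k = max(1, int(density * delta.numel()))       # survivors at this density
        thresh = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        kept = torch.where(delta.abs() >= thresh, delta, torch.zeros_like(delta))
        kept = kept / density                          # rescale to compensate for dropping
        merged_delta += weight * kept
        total_weight += weight
    if normalize:                                      # mirrors `normalize: 1.0` in the config
        merged_delta /= total_weight
    return base + lam * merged_delta                   # `lambda` scales the merged delta

# Tiny smoke test with random tensors standing in for real model weights.
base = torch.randn(4, 4)
tuned = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
print(della_style_merge(base, tuned, [0.55, 0.35, 0.25], [0.6, 0.6, 0.4]))
```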
Models Merged
The following models were included in the merge:
- jsgreenawalt/gemma-2-9B-it-advanced-v2.1
- lemon07r/Gemma-2-Ataraxy-9B
- ifable/gemma-2-Ifable-9B
Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/gemma-2-9b-it
dtype: bfloat16
merge_method: della
parameters:
  epsilon: 0.1
  int8_mask: 1.0
  lambda: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 42]
    model: unsloth/gemma-2-9b-it
  - layer_range: [0, 42]
    model: jsgreenawalt/gemma-2-9B-it-advanced-v2.1
    parameters:
      density: 0.55
      weight: 0.6
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-9B
    parameters:
      density: 0.35
      weight: 0.6
  - layer_range: [0, 42]
    model: ifable/gemma-2-Ifable-9B
    parameters:
      density: 0.25
      weight: 0.4
```
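To reproduce the merge, save the YAML above (e.g. as config.yaml, a placeholder path) and run it through mergekit. The Python sketch below assumes a recent mergekit exposing the run_merge API used in its example notebook; check the mergekit repo if signatures have moved:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yaml" and "./merged" are placeholder paths.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged",                          # output directory for the merged model
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=True,              # lower peak memory while loading shards
    ),
)
```

The equivalent CLI call is `mergekit-yaml config.yaml ./merged --cuda`.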
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
Metric | Value |
---|---|
Avg. | 18.77 |
IFEval (0-Shot) | 37.91 |
BBH (3-Shot) | 31.42 |
MATH Lvl 5 (4-Shot) | 0.00 |
GPQA (0-shot) | 11.86 |
MuSR (0-shot) | 3.59 |
MMLU-PRO (5-shot) | 27.81 |
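If you want to try the model despite those numbers, it loads like any other Gemma 2 checkpoint with transformers; this is a standard usage sketch, not something specific to this merge:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon07r/Gemma-2-Ataraxy-v2f-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a haiku about merging models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```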