This is a merge of pre-trained language models created using mergekit.
This model was merged using the Linear DARE (`dare_linear`) merge method, with Orenguteng/Llama-3.1-8B-Lexi-Uncensored as the base.

The following model was included in the merge:

* djuna/L3-Suze-Vume
The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_linear
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: up_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - value: 1
  - model: djuna/L3-Suze-Vume
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
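The per-filter weight lists above gate which model contributes each projection layer by layer: the base model supplies the first two and last two entries, and djuna/L3-Suze-Vume supplies the middle. As a minimal sketch of that layer-wise linear blend (a hypothetical helper, not mergekit's API; the DARE step of dropping and rescaling delta parameters is omitted for brevity):

```python
def linear_merge(tensors_a, tensors_b, weights_a, weights_b):
    """Blend two per-layer parameter lists (scalars stand in for tensors)
    using per-layer weights, normalized by the total weight at each layer."""
    merged = []
    for a, b, wa, wb in zip(tensors_a, tensors_b, weights_a, weights_b):
        total = wa + wb
        merged.append((wa * a + wb * b) / total if total else 0.0)
    return merged

# Weight lists mirroring the config: model A (the base) owns the outer
# layers, model B owns the middle ones.
wa = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
wb = [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
a = [1.0] * 11  # stand-in layer values from model A
b = [2.0] * 11  # stand-in layer values from model B
print(linear_merge(a, b, wa, wb))
# → [1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 1.0, 1.0]
```

A config like this is typically run with mergekit's `mergekit-yaml config.yaml ./output-dir` command to produce the merged checkpoint.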
Detailed results can be found here
| Metric              | Value |
|---------------------|------:|
| Avg.                | 25.75 |
| IFEval (0-Shot)     | 72.97 |
| BBH (3-Shot)        | 31.14 |
| MATH Lvl 5 (4-Shot) |  9.89 |
| GPQA (0-shot)       |  4.25 |
| MuSR (0-shot)       |  8.30 |
| MMLU-PRO (5-shot)   | 27.94 |