---
base_model:
- TheDrummer/UnslopNemo-12B-v4
- inflatebot/MN-12B-Mag-Mell-R1
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
library_name: transformers
tags:
- mergekit
- merge
---
# Proper README will be added soon!

# fratricide-12B-Unslop-Mell-DARKNESS

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the NuSLERP merge method, with [TheDrummer/UnslopNemo-12B-v4](https://huggingface.co./TheDrummer/UnslopNemo-12B-v4) as the base model.

### Models Merged

The following models were included in the merge:

* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co./inflatebot/MN-12B-Mag-Mell-R1)
* [DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS](https://huggingface.co./DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
    parameters:
      weight:
        - filter: self_attn
          value: [0.4, 0.5, 0.6, 0.5, 0.4]
        - filter: mlp
          value: [0.6, 0.7, 0.8, 0.7, 0.6]
        - value: [0.3, 0.4, 0.5, 0.4, 0.3]
  - model: inflatebot/MN-12B-Mag-Mell-R1
    parameters:
      weight:
        - filter: self_attn
          value: [0.2, 0.3, 0.4, 0.3, 0.2]
        - filter: mlp
          value: [0.5, 0.6, 0.7, 0.6, 0.5]
        - value: [0.4, 0.5, 0.6, 0.5, 0.4]
base_model: TheDrummer/UnslopNemo-12B-v4
merge_method: nuslerp
dtype: bfloat16
chat_template: "chatml"
tokenizer:
  source: union
parameters:
  normalize: true
  int8_mask: true
```
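For intuition about what the `nuslerp` method does, the core operation is spherical linear interpolation (SLERP) between weight tensors: instead of averaging parameters linearly, it interpolates along the arc between them, preserving magnitude better when the tensors point in different directions. The sketch below is a minimal, illustrative SLERP on toy vectors — mergekit's actual NuSLERP implementation additionally handles per-tensor weighting (the `filter`/`value` lists above) and interpolation relative to the base model, which this sketch does not attempt to reproduce.

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened tensors a and b."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    # Angle between the two tensors
    theta = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if theta < eps:
        # Nearly parallel: plain linear interpolation is numerically safer
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / sin_theta) * a \
         + (np.sin(t * theta) / sin_theta) * b

# Toy example: interpolating two orthogonal unit "weight tensors" halfway
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(a, b, 0.5)  # lies on the unit circle between a and b
```

Note how the midpoint keeps unit norm, whereas a plain average `0.5 * (a + b)` would shrink it to about 0.707 — this norm preservation is the main motivation for SLERP-style merges.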