# Magot-v1-Gemma2-8k-9B
This repo contains a merge of pre-trained language models created using mergekit.
This model is an experiment in using merging as a method of making an Instruct-heavy model less constrained in its text generation.
Tested at temp=1, minP=0.01. Coherence is high, though not perfect. The low-weight (0.2) infusion of the Magnum model provides needed variety in text generation. This model is being released because it is "good enough" and interesting. Inherent model safety remains strong thanks to the Instruct base, but narratives are less bounded by positivity.
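For reference, a minimal sketch of those sampler settings using the Hugging Face transformers library (recent versions support min_p sampling); the repo id below is assumed from this card's title:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Magot-v1-Gemma2-8k-9B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write the opening paragraph of a mystery story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # temp=1, as tested
    min_p=0.01,       # minP=0.01, as tested
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```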
When used, metadata link-backs to this model are appreciated; the motivation is curiosity about what people do with it.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
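For background, SLERP (spherical linear interpolation) blends two models' weights along an arc on the hypersphere rather than along a straight line, so intermediate points better respect the geometry of the weight space. Below is a minimal sketch of the operation on a pair of weight tensors, as an illustration rather than mergekit's exact implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors, computed on normalized copies.
    cos_omega = torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps))
    omega = torch.arccos(torch.clamp(cos_omega, -1.0, 1.0))
    if omega.abs() < 1e-6:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return torch.lerp(a, b, t)
    so = torch.sin(omega)
    mixed = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```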
### Models Merged
The following models were included in the merge:
- [grimjim/Kitsunebi-v1-Gemma2-8k-9B](https://huggingface.co/grimjim/Kitsunebi-v1-Gemma2-8k-9B)
- [anthracite-org/magnum-v3-9b-customgemma2](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: grimjim/Kitsunebi-v1-Gemma2-8k-9B
        layer_range: [0, 42]
      - model: anthracite-org/magnum-v3-9b-customgemma2
        layer_range: [0, 42]
merge_method: slerp
base_model: grimjim/Kitsunebi-v1-Gemma2-8k-9B
parameters:
  t:
    - value: 0.2
dtype: bfloat16
```
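With a single t value of 0.2, every layer sits roughly 20% of the way along the interpolation arc from the Kitsunebi base toward the Magnum model, which is the low-weight infusion described above. To reproduce the merge, a configuration like this can be passed to mergekit's command-line entry point, e.g. `mergekit-yaml config.yml ./output-model-directory`.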