This model is most likely broken.
- This Discussion shows there's a token leak in a similar model. I forgot to specify a union tokenizer, although I don't know whether that's the exact cause.
- I've released v2 here: redrix/matricide-12B-Unslop-Unleashed-v2
- This version will be left up for archival purposes, but may be deleted if it becomes obtrusive.
# matricide-12B-Unslop-Unleashed
This is a merge of pre-trained language models created using mergekit.
This is my second merge. The goal was to introduce UnslopNemo into NemoMix to help combat the GPT-isms that NemoMix suffers from. Now that NuSLERP has been released, I might redo this merge and patricide with adjusted parameters.
Testing stage: early testing
I do not know how this model holds up over long-term context; early testing showed stability and viable answers.
I am working on v2, so I won't bother with further testing of this version.
## Parameters
- Context size: no more than 20k is recommended; coherence may degrade beyond that.
- Chat Template: ChatML. Metharme/Pygmalion (as per UnslopNemo) may work, but its effects are untested.
- Samplers: a Temperature (applied last) of 1 and Min-P of 0.1 are viable, but haven't been fine-tuned. Activate DRY if repetition appears; XTC is untested. See the generation sketch after this list.
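The snippet below is a minimal generation sketch with these sampler settings using the `transformers` library; it is an illustration under assumptions, not a tested recipe. The repo id is inferred from the v2 link above, the prompt is a placeholder, and `min_p` requires a recent `transformers` release. DRY and XTC are frontend/backend samplers (e.g. SillyTavern or llama.cpp) and are not shown here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "redrix/matricide-12B-Unslop-Unleashed"  # inferred repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="bfloat16"
)

# ChatML is applied via the bundled chat template (chat_template: "chatml").
messages = [{"role": "user", "content": "Write a short scene in a tavern."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    inputs,
    do_sample=True,
    temperature=1.0,  # at 1.0 temperature is a no-op, so sampler order is moot
    min_p=0.1,        # the recommended Min-P
    max_new_tokens=256,
)
print(tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))
```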
## Quantization
I am working on v2, so I won't bother with quantizations for this version. Try my other models!
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
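For intuition: SLERP interpolates between the two models' weight tensors along the great-circle arc between them rather than averaging them linearly, which is often argued to preserve the geometry of the weights better than a plain lerp. Below is a minimal NumPy sketch of the underlying formula; it is illustrative only, not mergekit's actual implementation.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: t=0 returns a, t=1 returns b."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the tensors, treated as high-dimensional vectors.
    cos_omega = np.dot(a_flat, b_flat) / (
        np.linalg.norm(a_flat) * np.linalg.norm(b_flat) + eps
    )
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.sin(omega) < eps:
        # Nearly collinear tensors: fall back to linear interpolation.
        return (1.0 - t) * a + t * b
    # Interpolate along the arc between the two weight vectors.
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```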
### Models Merged
The following models were included in the merge:
- MarinaraSpaghetti/NemoMix-Unleashed-12B
- TheDrummer/UnslopNemo-12B-v4.1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
  - model: TheDrummer/UnslopNemo-12B-v4.1
base_model: TheDrummer/UnslopNemo-12B-v4.1
merge_method: slerp
dtype: bfloat16
tokenizer_source: "MarinaraSpaghetti/NemoMix-Unleashed-12B"
chat_template: "chatml"
parameters:
  t: [0.2, 0.7, 0.9, 0.8, 0.3]
```
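The list value for `t` defines a gradient: mergekit interpolates these anchor values across the layer stack, so the blend ratio varies with depth. To reproduce the merge, the sketch below uses mergekit's Python API, assuming the YAML above is saved as `config.yaml` and that the installed mergekit version exposes this interface; the CLI equivalent is `mergekit-yaml config.yaml <output-dir>`.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (path is a placeholder).
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./matricide-12B-Unslop-Unleashed",  # output directory (placeholder)
    options=MergeOptions(
        cuda=True,            # merge on GPU if one is available
        copy_tokenizer=True,  # write the tokenizer named in tokenizer_source
        lazy_unpickle=True,   # reduce peak RAM while loading shards
    ),
)
```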