---
base_model:
- Sao10K/Fimbulvetr-11B-v2
- Undi95/Mistral-11B-CC-Air-RP
library_name: transformers
tags:
- mergekit
- merge
---
# Fimbul-Airo-18B
Thanks to @mradermacher for his excellent quants! You can find his GGUFs for this repo here.
This is a merge of pre-trained language models created using mergekit.

I tested it for thirteen seconds. Works pretty well. It also seems happy when RoPE-scaled up to 8K context. Uncensored: it told me how to build a nuke.
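If you want to try the longer context yourself, RoPE scaling can be requested at load time in transformers. A minimal sketch, assuming a 4096-token native context (hence factor 2.0 for ~8K) and a placeholder model path; these are illustrative settings, not tested recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "path/to/Fimbul-Airo-18B"  # placeholder: local path or Hub repo id

# Linear RoPE scaling: factor 2.0 assumes a 4096-token native context,
# stretching position embeddings to cover roughly 8K tokens.
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    rope_scaling={"type": "linear", "factor": 2.0},
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
```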
## Merge Details

### Merge Method
This model was merged using the passthrough merge method: taking models and smashing 'em all together by stacking their layers.
### Models Merged
The following models were included in the merge:
- Sao10K/Fimbulvetr-11B-v2
- Undi95/Mistral-11B-CC-Air-RP
- CollectiveCognition-v1.1-Mistral-7B
- airoboros-mistral2.2-7b
- PIPPA dataset 11B qlora
- LimaRPv3 dataset 11B qlora
### The Sauce
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: Sao10K/Fimbulvetr-11B-v2
    layer_range: [0, 40]
- sources:
  - model: Undi95/Mistral-11B-CC-Air-RP
    layer_range: [8, 48]
merge_method: passthrough
dtype: bfloat16
```
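To reproduce the merge, this config can be fed straight to mergekit's CLI, e.g. `mergekit-yaml config.yml ./Fimbul-Airo-18B` (the output path is just a placeholder).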
## Prompt Format: Alpaca
```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:
```
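For completeness, here is a minimal generation sketch with transformers that wires a prompt into this template. The model path, example instruction, and sampling settings are all placeholders, not recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "path/to/Fimbul-Airo-18B"  # placeholder: local path or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto", device_map="auto")

# Build an Alpaca-style prompt matching the template above.
prompt = (
    "### Instruction:\n"
    "Summarize the context in one sentence.\n\n"
    "### Input:\n"
    "Fimbul-Airo-18B is a passthrough merge of two 11B models.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```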
Less silly models are in the works. I'm still figuring things out right now, so don't judge the bazillions of readme edits and other goofiness.
Don't forget to take care of yourself and have a wonderful day!