---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/p0ZBFYc1RNoYcowv3Nj40.jpeg)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/peSTK513q1WOfRfRzWga4.png)
# Information
## Details
Improved NemoRemix for storytelling and roleplay. Plus, this one can also be used as a general assistant model. The prose is pretty much the same, but the model got smarter thanks to the addition of Migtissera's amazing Tess. I yeeted out Gryphe's Pantheon-RP, though, because it was trained with asterisks in mind, unlike the rest of the models in the merge, which caused it to mess up the formatting from time to time; this one doesn't do that anymore. Hooray! All credits and thanks go to Migtissera, MistralAI, Anthracite, Sao10K, and ShuttleAI for their amazing models.
## Instruct
ChatML, but Mistral Instruct should work too (theoretically). Important: remember to add `<|im_end|>` to your custom stopping strings, otherwise it will appear in the output.
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
```
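If you're running the model through `transformers` directly instead of a frontend, here's a minimal sketch of building the prompt in the format above by hand. The repo id, sample messages, and generation settings are placeholders, not part of this card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this model; point at a local path if you have one.
MODEL = "MarinaraSpaghetti/NemoReRemix-12B"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

# Build the prompt exactly as in the ChatML template above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short scene set in a rainy harbor town.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True)
text = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:])

# As the card warns, <|im_end|> can show up in the output; trim at it.
print(text.split("<|im_end|>")[0])
```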
## Parameters
I recommend running Temperature 1.0-1.2 with 0.1 Top A or 0.01-0.1 Min P, plus DRY at 0.8/1.75/2/0 (Multiplier/Base/Allowed Length/Penalty Range). Temperatures below 1.0 also work. Nothing more is needed.
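Note that plain `transformers` only covers part of this list; Top A and DRY are handled by frontends and backends like SillyTavern or llama.cpp-based servers. A rough sketch of mapping the supported subset onto a `GenerationConfig` (Min P needs a reasonably recent `transformers` release; the exact values here are just one point in the recommended ranges):
```python
from transformers import GenerationConfig

# Partial mapping of the recommended sampling settings onto transformers.
# Top A and DRY are not available here; configure those in your frontend.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=1.1,  # recommended range: 1.0-1.2
    min_p=0.05,       # recommended range: 0.01-0.1
)

# Then pass it to generate():
# output = model.generate(**inputs, generation_config=generation_config)
```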
### Settings
You can use my exact settings from here (use the ones from the ChatML Base/Customized folder): https://huggingface.co./MarinaraSpaghetti/SillyTavern-Settings/tree/main.
## GGUF
https://huggingface.co./MarinaraSpaghetti/NemoReRemix-GGUF
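If you grab one of the quants, here's a minimal loading sketch with `llama-cpp-python`; the quant filename pattern below is a guess, so check the repo for the actual file names:
```python
from llama_cpp import Llama

# Download a quant straight from the GGUF repo linked above.
# The filename pattern is hypothetical; pick a real file from the repository.
llm = Llama.from_pretrained(
    repo_id="MarinaraSpaghetti/NemoReRemix-GGUF",
    filename="*Q5_K_M*.gguf",
    n_ctx=8192,
)

out = llm(
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=128,
    temperature=1.1,
    min_p=0.05,
    stop=["<|im_end|>"],  # per the stopping-string note in the Instruct section
)
print(out["choices"][0]["text"])
```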
# NemoReRemix-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with E:\mergekit\mistralaiMistral-Nemo-Base-2407 as the base.
### Models Merged
The following models were included in the merge:
* E:\mergekit\Sao10K_MN-12B-Lyra-v1
* E:\mergekit\mistralaiMistral-Nemo-Instruct-2407
* E:\mergekit\migtissera_Tess-3-Mistral-Nemo
* E:\mergekit\shuttleai_shuttle-2.5-mini
* E:\mergekit\anthracite-org_magnum-12b-v2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: E:\mergekit\mistralaiMistral-Nemo-Instruct-2407
    parameters:
      weight: 0.1
      density: 0.4
  - model: E:\mergekit\Sao10K_MN-12B-Lyra-v1
    parameters:
      weight: 0.12
      density: 0.5
  - model: E:\mergekit\shuttleai_shuttle-2.5-mini
    parameters:
      weight: 0.2
      density: 0.6
  - model: E:\mergekit\migtissera_Tess-3-Mistral-Nemo
    parameters:
      weight: 0.25
      density: 0.7
  - model: E:\mergekit\anthracite-org_magnum-12b-v2
    parameters:
      weight: 0.33
      density: 0.8
merge_method: della_linear
base_model: E:\mergekit\mistralaiMistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
```
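If you want to reproduce the merge, mergekit can be driven from Python as well as from its CLI. A minimal sketch, assuming mergekit is installed, the config above is saved to a local file (the filename here is made up), and the `E:\mergekit\...` paths are swapped for checkpoints you actually have or Hugging Face repo ids:
```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above (saved locally, paths adjusted).
with open("nemoreremix.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./NemoReRemix-12B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the tokenizer into the output
        lazy_unpickle=True,              # lower peak memory while loading shards
    ),
)
```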
# Ko-fi
## Enjoying what I do? Consider donating here. Thank you!
https://ko-fi.com/spicy_marinara