EDIT:
Always check my space for the latest benchmark results for my models!
IMPORTANT NOTE | READ ME!
This model uses udkai/Turdus, which may produce inaccurate results for the Winogrande evaluation scores. The following quotes are taken directly from that model's page:
- "A less contaminated version of udkai/Garrulus and the second model to be discussed in the paper Subtle DPO-Contamination with modified Winogrande increases TruthfulQA, Hellaswag & ARC."
- "Subtle DPO-Contamination with modified Winogrande causes the average accuracy of all 5-non Winogrande metrics (e.g. including also MMLU and GSM8K) to be 0.2% higher than the underlying model."
As I understand it, the Winogrande scores are only slightly influenced by the DPO contamination, which has the "side-effect" of increasing the scores on the other benchmarks. Since the effect on the Winogrande scores was subtle in the udkai/Turdus benchmark results, and this model combines Turdus with other models (probably making the effect even less pronounced), I still believe this model can be of value to the community, as its overall performance is quite impressive. However, I do not want to mislead anybody or produce any unfair scores, hence this note! The full merge configuration is also fully transparent and can be found below.
I hope this model will prove useful to somebody. There are GGUF versions available for inference here: https://huggingface.co./CultriX/MergeTrix-7B-GGUF. I personally tested them and found that they produce very pleasing results.
Kind regards, CultriX
PERSONAL DISCLAIMER
(This is probably a good moment to point out that I'm an amateur doing this for fun and am by no means an IT professional or data scientist. Therefore my understanding of these topics might be incomplete, missing or simply completely wrong, in turn causing me to make inaccurate claims. If you notice that this is the case, I invite you to notify me of my mistakes so that I can rectify any potential inaccuracies as soon as possible. Thanks for understanding!)
Shoutout
Once again, a major thank you and shoutout to @mlabonne for his amazing article that I used to produce this result, which can be found here: https://towardsdatascience.com/merge-large-language-models-with-mergekit-2118fb392b54. My other model, CultriX/MistralTrix-v1, was based on another great article from the same guy, which can be found here: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac (I hope he doesn't mind me using his own articles to beat him on the LeaderBoards for the second time this week... Like last time, all credit should be directed at him really!)
MODEL INFORMATION:
NAME: MergeTrix-7B
MergeTrix-7B is a merge of the following models using LazyMergekit:
🧩 Configuration
models:
  - model: udkai/Turdus
    # No parameters necessary for base model
  - model: abideen/NexoNimbus-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: fblgit/UNA-TheBeagle-7b-v1
    parameters:
      density: 0.53
      weight: 0.3
  - model: argilla/distilabeled-Marcoro14-7B-slerp
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: udkai/Turdus
parameters:
  int8_mask: true
dtype: bfloat16
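If you want to reproduce the merge yourself, a rough sketch of the corresponding mergekit run is shown below. This assumes the YAML above is saved as config.yaml and that mergekit is installed; the output directory name and the exact CLI flags are assumptions and may differ between mergekit versions.

# Install mergekit (assumption: a plain PyPI install is sufficient for this config)
!pip install -qU mergekit

# Merge according to config.yaml and write the result to ./MergeTrix-7B
# (--copy-tokenizer keeps the base model's tokenizer, --cuda runs the merge on GPU)
!mergekit-yaml config.yaml ./MergeTrix-7B --copy-tokenizer --cuda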
💻 Usage
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/MergeTrix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into a text-generation pipeline (float16, automatic device placement)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate a sampled completion and print it
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
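For the GGUF versions linked above, a minimal local-inference sketch using llama-cpp-python is shown below. The quantization filename (Q4_K_M) is an assumption; check the GGUF repository for the files that are actually available.

# pip install -qU llama-cpp-python huggingface_hub

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the GGUF repo
# (filename is assumed; verify against https://huggingface.co./CultriX/MergeTrix-7B-GGUF)
gguf_path = hf_hub_download(
    repo_id="CultriX/MergeTrix-7B-GGUF",
    filename="mergetrix-7b.Q4_K_M.gguf",
)

# Load the model and generate a short completion
llm = Llama(model_path=gguf_path, n_ctx=2048)
output = llm("What is a large language model?", max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])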