---
license: apache-2.0
tags:
- moe
- mixtral
- fblgit/UNA-TheBeagle-7b-v1
- openchat/openchat-3.5-0106
- azale-ai/Starstreak-7b-beta
- gagan3012/Mistral_arabic_dpo
- davidkim205/komt-mistral-7b-v1
- OpenBuddy/openbuddy-zephyr-7b-v14.1
- manishiitg/open-aditi-hi-v1
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
---

# Multirial

This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:

* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co./fblgit/UNA-TheBeagle-7b-v1)
* [openchat/openchat-3.5-0106](https://huggingface.co./openchat/openchat-3.5-0106)
* [azale-ai/Starstreak-7b-beta](https://huggingface.co./azale-ai/Starstreak-7b-beta)
* [gagan3012/Mistral_arabic_dpo](https://huggingface.co./gagan3012/Mistral_arabic_dpo)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co./davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co./OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co./manishiitg/open-aditi-hi-v1)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co./VAGOsolutions/SauerkrautLM-7b-v1-mistral)

## 🧩 Configuration

```yaml
base_model: gagan3012/Mistral_arabic_dpo
dtype: bfloat16
experts:
  - positive_prompts:
      - chat
      - assistant
      - tell me
      - explain
    source_model: fblgit/UNA-TheBeagle-7b-v1
  - positive_prompts:
      - chat
      - assistant
      - tell me
      - explain
    source_model: openchat/openchat-3.5-0106
  - positive_prompts:
      - indonesian
      - indonesia
      - answer in indonesian
    source_model: azale-ai/Starstreak-7b-beta
  - positive_prompts:
      - arabic
      - arab
      - arabia
      - answer in arabic
    source_model: gagan3012/Mistral_arabic_dpo
  - positive_prompts:
      - korean
      - answer in korean
      - korea
    source_model: davidkim205/komt-mistral-7b-v1
  - positive_prompts:
      - chinese
      - china
      - answer in chinese
    source_model: OpenBuddy/openbuddy-zephyr-7b-v14.1
  - positive_prompts:
      - hindi
      - india
      - hindu
      - answer in hindi
    source_model: manishiitg/open-aditi-hi-v1
  - positive_prompts:
      - german
      - germany
      - answer in german
      - deutsch
    source_model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
gate_mode: hidden
```
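
The `positive_prompts` under each expert initialize mergekit's router: with `gate_mode: hidden`, the gate is derived from hidden-state representations of those prompts, so matching inputs are steered toward that expert. To reproduce the merge, save the configuration above to a file and run mergekit's MoE script. This is a minimal sketch; `config.yaml` and `./multirial-out` are illustrative names, not from this card:

```python
# Sketch only: install mergekit (mixtral branch) and run the MoE merge.
!git clone -b mixtral https://github.com/cg123/mergekit.git
!cd mergekit && pip install -qe .

# config.yaml holds the YAML shown above; ./multirial-out is an arbitrary output dir.
!mergekit-moe config.yaml ./multirial-out
```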

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "gagan3012/Multirial"

# Load the tokenizer and build a 4-bit text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template, then sample a response.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
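
If your `transformers` version warns that `load_in_4bit` inside `model_kwargs` is deprecated, the same 4-bit setup can be expressed with an explicit `BitsAndBytesConfig`. A sketch of that variant, assumed equivalent in behavior:

```python
import torch
import transformers
from transformers import BitsAndBytesConfig

# Equivalent 4-bit loading via an explicit quantization config
# instead of the bare load_in_4bit flag.
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
pipeline = transformers.pipeline(
    "text-generation",
    model="gagan3012/Multirial",
    model_kwargs={
        "torch_dtype": torch.float16,
        "quantization_config": quantization_config,
    },
)
```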