This is a repository of GGUF Quants for DareBeagel-2x7B
Original Model Available Here: https://huggingface.co./shadowml/DareBeagel-2x7B
Available Quants
- Q8_0
- Q6_K
- Q5_K_M
- Q5_K_S
- Q4_K_M
- Q4_K_S
- Q3_K_M
- Q3_K_S
- Q2_K
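Any single quant can be fetched on its own rather than cloning the whole repo. A minimal sketch using `huggingface-cli` — the repository id and `.gguf` filename below are placeholders, so check this repo's file list for the actual names:

```shell
# Download one quant file (repo id and filename are placeholders; see the Files tab)
huggingface-cli download <this-repo-id> darebeagel-2x7b.Q4_K_M.gguf --local-dir .
```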
DareBeagel-2x7B
DareBeagel-2x7B is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralDaredevil-7B
🧩 Configuration

```yaml
base_model: mlabonne/NeuralBeagle14-7B
gate_mode: random
experts:
  - source_model: mlabonne/NeuralBeagle14-7B
    positive_prompts: [""]
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts: [""]
```
💻 Usage

Load the GGUF in Kobold.cpp or any other llama.cpp-based frontend. Alpaca (and Alpaca-style) prompts worked well in my testing.

Settings that worked well for me:
- Min P: 0.1
- Dynamic Temperature: Min 0, Max 3
- Repetition Penalty: 1.03
- Repetition Penalty Range: 1000
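For reference, the standard Alpaca prompt template looks like the following (this is the common community format, not anything specific to this model; `{instruction}` is a placeholder for your request):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```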