This is a 3.5 bpw ExLlamaV2 quantization of BeaverAI/mistral-dory-12b, made with the default calibration dataset at 8192 context length.
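A minimal sketch of loading this quant with the exllamav2 Python library follows; the model path and sampler values are placeholder assumptions, not settings from this card.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at the downloaded quant directory (assumption: adjust to your path).
config = ExLlamaV2Config()
config.model_dir = "models/mistral-dory-12b-exl2-3.5bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampler value

prompt = "### Instruction:\nHello!\n### Response:\n"
output = generator.generate_simple(prompt, settings, 128)
print(output)
```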
Original Model card:
# Dory 12b
Redone instruct finetune of Mistral Nemo 12B. Not (E)RP-focused, leave that to Drummer.
Thanks to Twisted for the compute :3
## Prompting
Alpaca-like:

```
### System:
[Optional system prompt]
### Instruction:
[Query]
### Response:
[Response]<EOT>
### Instruction:
[...]
```
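A small sketch of assembling a single-turn prompt in this format; the `format_prompt` helper is illustrative, and the exact whitespace between blocks is an assumption.

```python
def format_prompt(instruction: str, system: str | None = None) -> str:
    """Build a single-turn prompt in the Alpaca-like format above."""
    prompt = ""
    if system:
        prompt += f"### System:\n{system}\n"
    prompt += f"### Instruction:\n{instruction}\n"
    prompt += "### Response:\n"  # the model completes from here, ending with <EOT>
    return prompt

prompt = format_prompt("Summarize what bpw means in quantization.")
```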
## Training details
Rank 64 QDoRA, trained primarily on Claude and Gemma 2 multiturn data. (It's midnight and I'll probably write more details tomorrow.)
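For context, QDoRA trains DoRA adapters on top of a quantized base model. Below is a minimal sketch of a rank-64 DoRA setup using PEFT's `use_dora` flag with 4-bit bitsandbytes loading; the target modules and hyperparameters are assumptions for illustration, not the author's actual training configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit (the "Q" in QDoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Base-2407",
    quantization_config=bnb_config,
    device_map="auto",
)

# Rank-64 DoRA adapters; target modules are an assumption.
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    use_dora=True,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()
```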
Base model: mistralai/Mistral-Nemo-Base-2407