DolphinVision 72b - 3.5bpw EXL2 🐬

Base model: cognitivecomputations/dolphin-vision-72b

The language model is quantized to 3.5bpw, with the vision layers merged back in at FP16.

Text generation works in exllamav2/tabbyapi. Vision input is not working yet.

N.B. The architecture in config.json has been changed from "BunnyQwenForCausalLM" to "Qwen2ForCausalLM" to prevent the model from being loaded as a llama model in tabbyapi.
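For reference, the change amounts to a one-line edit to the "architectures" field in config.json; a minimal sketch (all other fields omitted):

```json
{
  "architectures": [
    "Qwen2ForCausalLM"
  ]
}
```

Reverting this field to "BunnyQwenForCausalLM" would restore the original architecture name from the base model's config.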
