DolphinVision 72b - 3.5bpw EXL2 🐬
Base model: cognitivecomputations/dolphin-vision-72b
Language model quantized to 3.5bpw, with the FP16 vision layers merged back in.

Text generation works in exllamav2/tabbyAPI. Vision input is not working yet.
N.B. The architecture in `config.json` has been changed from "BunnyQwenForCausalLM" to "Qwen2ForCausalLM" to prevent the model from being loaded as a llama model in tabbyAPI.
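If you need to apply the same fix to another copy of the weights, the edit is a one-field change to `config.json`. A minimal sketch using only the standard library (the function name and default values here are illustrative, not part of this repo):

```python
import json

def patch_architecture(config_path: str, new_arch: str = "Qwen2ForCausalLM") -> dict:
    """Rewrite the `architectures` field of a Hugging Face config.json so
    loaders pick the Qwen2 code path instead of the original class name."""
    with open(config_path) as f:
        config = json.load(f)
    # "architectures" is a list; replace it wholesale with the target class.
    config["architectures"] = [new_arch]
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Run it against the model directory's `config.json` before loading, e.g. `patch_architecture("dolphin-vision-72b/config.json")`.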