Fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct


Downloads last month: 3,757

GGUF
Model size: 7.62B params
Architecture: qwen2
Available quantizations: 4-bit, 16-bit
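As a rough guide to download and memory requirements, the listed quantizations imply approximate file sizes of params × bits ÷ 8. This is a back-of-envelope sketch only: real GGUF files are somewhat larger because of metadata and because quantization schemes mix precisions across tensors.

```python
PARAMS = 7.62e9  # parameter count from the model card


def approx_size_gb(bits_per_param: float) -> float:
    """Rough GGUF file size estimate: params * bits / 8, in gigabytes."""
    return PARAMS * bits_per_param / 8 / 1e9


print(f"4-bit:  ~{approx_size_gb(4):.1f} GB")   # ~3.8 GB
print(f"16-bit: ~{approx_size_gb(16):.1f} GB")  # ~15.2 GB
```

In practice the 4-bit file fits comfortably on consumer GPUs with 8 GB of VRAM, while the 16-bit file needs roughly the same memory as the unquantized model.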

Inference Providers: this model is not currently available via any of the supported third-party Inference Providers, and the HF Inference API was unable to determine its pipeline type.

Model tree for WSDW/Qwen2_CN_NSFW_GGUF
Quantized (7): this model