roleplaiapp/Omni-Reasoner-2B-Q6_K-GGUF

Repo: roleplaiapp/Omni-Reasoner-2B-Q6_K-GGUF
Original Model: Omni-Reasoner-o1
Organization: prithivMLmods
Quantized File: omni-reasoner-2b-q6_k.gguf
Quantization: GGUF
Quantization Method: Q6_K
Use Imatrix: False
Split Model: False
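
The repo and filename listed above are enough to fetch the quant programmatically. A minimal download sketch using huggingface_hub (hf_hub_download is the standard Hub API; the destination is the library's default cache):

```python
# Download the quantized GGUF file from the Hugging Face Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Repo ID and filename are taken from the metadata above.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Omni-Reasoner-2B-Q6_K-GGUF",
    filename="omni-reasoner-2b-q6_k.gguf",
)
print(model_path)  # local path inside the HF cache
```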

Overview

This is a GGUF Q6_K quantized version of Omni-Reasoner-o1.
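
A minimal local-inference sketch using llama-cpp-python, assuming your llama.cpp build supports the qwen2vl architecture. The prompt and sampling settings are illustrative; image input (which would require additional projector files not covered here) is not shown:

```python
# Minimal text-generation sketch with llama-cpp-python.
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="omni-reasoner-2b-q6_k.gguf",  # path to the downloaded quant
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm(
    "Question: If a train travels 60 km in 45 minutes, "
    "what is its average speed?\nAnswer:",
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```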

Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

GGUF

Model size: 1.54B params
Architecture: qwen2vl
Precision: 6-bit

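As a rough sanity check on the figures above: llama.cpp's Q6_K format stores roughly 6.5625 bits per weight, so the expected file size can be estimated from the parameter count (the bits-per-weight figure and the overhead note are approximations, not measured values):

```python
# Back-of-the-envelope size estimate for a Q6_K quant.
params = 1.54e9              # parameter count from the metadata above
bits_per_weight = 6.5625     # approximate effective bits/weight for Q6_K
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.2f} GB")  # ~1.26 GB, plus a small overhead for GGUF metadata
```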

Model tree for roleplaiapp/Omni-Reasoner-2B-Q6_K-GGUF

Base model: Qwen/Qwen2-VL-2B (this model is one of its 33 quantized derivatives)