
Qiskit/granite-8b-qiskit-GGUF

This is the Q4_K_M quantized GGUF conversion of the original Qiskit/granite-8b-qiskit. Please refer to the original granite-8b-qiskit model card for more details.
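As a minimal sketch of how a GGUF quantization like this one is typically loaded, the snippet below uses llama-cpp-python. The exact `.gguf` filename in the repository is an assumption (GGUF releases conventionally name files `<model>-<quant>.gguf`); check the repo's file list before downloading.

```python
REPO_ID = "Qiskit/granite-8b-qiskit-GGUF"


def gguf_filename(base: str, quant: str) -> str:
    """Build the conventional GGUF filename for a quantization tag.

    This naming scheme is an assumption; verify against the repo's files.
    """
    return f"{base}-{quant}.gguf"


def load_model(model_path: str):
    """Load a quantized GGUF model with llama-cpp-python (not executed here)."""
    from llama_cpp import Llama  # pip install llama-cpp-python
    return Llama(model_path=model_path, n_ctx=4096)


# Assumed filename for the Q4_K_M quantization:
filename = gguf_filename("granite-8b-qiskit", "Q4_K_M")
print(filename)  # granite-8b-qiskit-Q4_K_M.gguf
```

To run the model, pass the downloaded file's path to `load_model` and call the resulting object with a prompt, as in standard llama-cpp-python usage.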

Downloads last month: 80
Format: GGUF (4-bit, Q4_K_M)
Model size: 8.05B params
Architecture: llama