Model Summary

This repository hosts quantized versions of the Phi-4-mini-instruct model.

Format: GGUF
Converter: llama.cpp 06c2b1561d8b882bc018554591f8c35eb04ad30e
Quantizer: LM-Kit.NET 2025.3.1

For more detailed information on the base model, please see the following link:

Model size: 3.84B params
Architecture: phi3
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
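The parameter count and quantization levels above give a rough way to estimate download size. The sketch below assumes a uniform bits-per-weight; real GGUF files mix block formats and store scales plus metadata, so actual sizes differ somewhat.

```python
# Rough file-size estimates for each quantization level of a 3.84B-parameter
# model. Ballpark figures only: llama.cpp quant formats mix bit widths and
# add per-block scales and metadata on top of the raw weights.

PARAMS = 3.84e9  # parameter count reported for this model


def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate file size in GB assuming uniform bits-per-weight."""
    return params * bits_per_weight / 8 / 1e9


for bits in (2, 3, 4, 5, 6, 8, 16):
    print(f"{bits:>2}-bit: ~{approx_size_gb(bits):.2f} GB")
```

For example, the 4-bit variant works out to roughly 1.9 GB of raw weight data, and the 16-bit variant to roughly 7.7 GB.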
