This repository contains GGUF quantized versions of BramVanroy/fietje-2.
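
A single quant can be fetched with the `huggingface_hub` library; a minimal sketch, where the exact file name is an assumption (check the repository's file list):

```python
# Minimal sketch: download one quant file from this repository.
# The filename below is an assumption -- list the repo files to confirm.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="BramVanroy/fietje-2-gguf",
    filename="fietje-2-Q4_K_M.gguf",  # hypothetical file name
)
print(path)  # local cache path of the downloaded GGUF file
```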

Available quantization types, with size and expected quality loss relative to the f16 baseline (higher perplexity is worse). The figures are llama.cpp's reference measurements on LLaMA-v1-7B, so they indicate relative quality rather than this model's actual file sizes:

| Quant type | Size     | Perplexity increase vs. f16 |
|------------|----------|-----------------------------|
| Q3_K_M     | 3.07 GB  | +0.2496 ppl @ LLaMA-v1-7B   |
| Q4_K_M     | 3.80 GB  | +0.0532 ppl @ LLaMA-v1-7B   |
| Q5_K_M     | 4.45 GB  | +0.0122 ppl @ LLaMA-v1-7B   |
| Q6_K       | 5.15 GB  | +0.0008 ppl @ LLaMA-v1-7B   |
| Q8_0       | 6.70 GB  | +0.0004 ppl @ LLaMA-v1-7B   |
| F16        | 13.00 GB | baseline @ 7B               |
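
Once downloaded, a quant can be run with, for example, the llama-cpp-python bindings. A minimal sketch, reusing the hypothetical file name from above:

```python
# Minimal sketch: run a completion on a local GGUF quant with
# llama-cpp-python. Model path and sampling settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="fietje-2-Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=2048,  # phi2 models have a 2048-token context window
)

out = llm(
    "De hoofdstad van Nederland is",  # Dutch prompt; fietje is a Dutch model
    max_tokens=32,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```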

The model is also available on Ollama.
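
With Ollama, a completion can be requested through the ollama Python client; a minimal sketch, where the model tag is hypothetical (check the Ollama library for the published name):

```python
# Minimal sketch: text completion through the ollama Python client.
# The model tag is hypothetical -- check the Ollama library for the real one.
import ollama

response = ollama.generate(
    model="bramvanroy/fietje-2",  # hypothetical tag
    prompt="De hoofdstad van Nederland is",
)
print(response["response"])
```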

Quants were made with release b2777 of llama.cpp.

The GGUF files are based on a 2.78B-parameter model with the phi2 architecture.
