
Llamacpp Quantizations of gemma-2-2b

Using llama.cpp release b3583 for quantization.
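
For reference, here is a minimal sketch of how GGUF quants like these are typically produced with llama.cpp; the local paths and output names below are assumptions for illustration, not the exact commands used for this repo.

```bash
# Convert the original Hugging Face checkpoint to a full-precision GGUF
# (assumes llama.cpp release b3583 built locally and the google/gemma-2-2b
#  weights already downloaded to ./gemma-2-2b).
python convert_hf_to_gguf.py ./gemma-2-2b --outtype f32 --outfile gemma-2-2b.FP32.gguf

# Quantize the FP32 GGUF to one of the listed quant types, e.g. Q4_K_M.
./llama-quantize gemma-2-2b.FP32.gguf gemma-2-2b-Q4_K_M.gguf Q4_K_M
```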

Original model: https://huggingface.co./google/gemma-2-2b

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ----------------------------------- |
| gemma-2-2b.FP32.gguf | FP32 | 10.50GB | 8.9236 +/- 0.06373 |
| gemma-2-2b-Q8_0.gguf | Q8_0 | 2.78GB | 8.9299 +/- 0.06377 |
| gemma-2-2b-Q6_K.gguf | Q6_K | 2.15GB | 8.9570 +/- 0.06404 |
| gemma-2-2b-Q5_K_M.gguf | Q5_K_M | 1.92GB | 9.0061 +/- 0.06461 |
| gemma-2-2b-Q5_K_S.gguf | Q5_K_S | 1.88GB | 9.0096 +/- 0.06451 |
| gemma-2-2b-Q4_K_M.gguf | Q4_K_M | 1.71GB | 9.2260 +/- 0.06643 |
| gemma-2-2b-Q4_K_S.gguf | Q4_K_S | 1.64GB | 9.3116 +/- 0.06726 |
| gemma-2-2b-Q3_K_L.gguf | Q3_K_L | 1.55GB | 9.5683 +/- 0.06909 |
| gemma-2-2b-Q3_K_M.gguf | Q3_K_M | 1.46GB | 9.7759 +/- 0.07120 |
| gemma-2-2b-Q3_K_S.gguf | Q3_K_S | 1.36GB | 10.8067 +/- 0.08032 |
| gemma-2-2b-Q2_K.gguf | Q2_K | 1.23GB | 13.8994 +/- 0.10723 |
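
The perplexity values above were measured on the wikitext-2-raw-v1 test split. A rough sketch of the kind of llama.cpp command used for such a measurement follows; the local file names are assumptions.

```bash
# Perplexity over the wikitext-2-raw test set with llama.cpp's perplexity tool.
# wiki.test.raw is the raw test split of wikitext-2-raw-v1; the model path is an example.
./llama-perplexity -m gemma-2-2b-Q4_K_M.gguf -f wiki.test.raw
```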

Benchmark Results

Results have been computed using:

- hellaswag_val_full
- winogrande-debiased-eval
- mmlu-validation

| Benchmark | Quant type | Metric |
| --------- | ---------- | ------ |
| WinoGrande (0-shot) | Q8_0 | 68.3504 +/- 1.3072 |
| WinoGrande (0-shot) | Q4_K_M | 67.5612 +/- 1.3157 |
| WinoGrande (0-shot) | Q3_K_M | 65.9037 +/- 1.3323 |
| WinoGrande (0-shot) | Q3_K_S | 66.6930 +/- 1.3246 |
| WinoGrande (0-shot) | Q2_K | 63.2991 +/- 1.3546 |
| HellaSwag (0-shot) | Q8_0 | 71.25074686 |
| HellaSwag (0-shot) | Q4_K_M | 69.95618403 |
| HellaSwag (0-shot) | Q3_K_M | 68.00438160 |
| HellaSwag (0-shot) | Q3_K_S | 69.95618403 |
| HellaSwag (0-shot) | Q2_K | 59.38060147 |
| MMLU (0-shot) | Q8_0 | 35.5943 +/- 1.2173 |
| MMLU (0-shot) | Q4_K_M | 35.5943 +/- 1.2173 |
| MMLU (0-shot) | Q3_K_M | 35.2067 +/- 1.2143 |
| MMLU (0-shot) | Q3_K_S | 33.9147 +/- 1.2037 |
| MMLU (0-shot) | Q2_K | 33.0749 +/- 1.1962 |

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```bash
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

huggingface-cli download fedric95/gemma-2-2b-GGUF --include "gemma-2-2b-Q4_K_M.gguf" --local-dir ./

If the model is larger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

huggingface-cli download fedric95/gemma-2-2b-GGUF --include "gemma-2-2b-Q8_0.gguf/*" --local-dir gemma-2-2b-Q8_0

You can either specify a new local-dir (gemma-2-2b-Q8_0) or download them all in place (./).
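
Once downloaded, a GGUF file can be loaded directly by llama.cpp. A quick usage sketch (the prompt and context size here are illustrative):

```bash
# Run a short generation with llama.cpp's CLI; -m points at the downloaded quant.
./llama-cli -m ./gemma-2-2b-Q4_K_M.gguf -c 4096 -p "Write a haiku about quantization."
```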

Reproducibility

https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638
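
The benchmark scores above can in principle be re-run with llama.cpp's perplexity tool, following the procedure described in the discussion linked above. A rough sketch follows; the data file names, extensions, and task counts are assumptions based on the datasets listed in the benchmark section.

```bash
# HellaSwag accuracy (0-shot); hellaswag_val_full.txt and the task count are assumptions.
./llama-perplexity -m gemma-2-2b-Q8_0.gguf --hellaswag --hellaswag-tasks 10042 -f hellaswag_val_full.txt

# WinoGrande accuracy (0-shot); winogrande-debiased-eval.csv and the task count are assumptions.
./llama-perplexity -m gemma-2-2b-Q8_0.gguf --winogrande --winogrande-tasks 1267 -f winogrande-debiased-eval.csv

# MMLU accuracy (0-shot) via the multiple-choice mode; mmlu-validation.bin is an assumption.
./llama-perplexity -m gemma-2-2b-Q8_0.gguf --multiple-choice -f mmlu-validation.bin
```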
