
Llamacpp Quantizations of starchat2-15b-v0.1

Using llama.cpp release b2405 for quantization.

Original model: https://huggingface.co./HuggingFaceH4/starchat2-15b-v0.1

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| starchat2-15b-v0.1-Q8_0.gguf | Q8_0 | 16.96GB | Extremely high quality, generally unneeded but max available quant. |
| starchat2-15b-v0.1-Q6_K.gguf | Q6_K | 13.10GB | Very high quality, near perfect, recommended. |
| starchat2-15b-v0.1-Q5_K_M.gguf | Q5_K_M | 11.43GB | High quality, very usable. |
| starchat2-15b-v0.1-Q5_K_S.gguf | Q5_K_S | 11.02GB | High quality, very usable. |
| starchat2-15b-v0.1-Q5_0.gguf | Q5_0 | 11.02GB | High quality, older format, generally not recommended. |
| starchat2-15b-v0.1-Q4_K_M.gguf | Q4_K_M | 9.86GB | Good quality, similar to 4.25 bpw. |
| starchat2-15b-v0.1-Q4_K_S.gguf | Q4_K_S | 9.25GB | Slightly lower quality with small space savings. |
| starchat2-15b-v0.1-Q4_0.gguf | Q4_0 | 9.06GB | Decent quality, older format, generally not recommended. |
| starchat2-15b-v0.1-Q3_K_L.gguf | Q3_K_L | 8.96GB | Lower quality but usable, good for low RAM availability. |
| starchat2-15b-v0.1-Q3_K_M.gguf | Q3_K_M | 8.10GB | Even lower quality. |
| starchat2-15b-v0.1-Q3_K_S.gguf | Q3_K_S | 6.98GB | Low quality, not recommended. |
| starchat2-15b-v0.1-Q2_K.gguf | Q2_K | 6.19GB | Extremely low quality, not recommended. |
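
If you prefer to script the download of a single file, here is a minimal sketch using the `huggingface_hub` Python package (one common option, not part of this card). The filename is taken from the table above; swap in whichever quant you want.

```python
from huggingface_hub import hf_hub_download

# Fetch only one GGUF file instead of cloning the whole repository.
model_path = hf_hub_download(
    repo_id="bartowski/starchat2-15b-v0.1-GGUF",
    filename="starchat2-15b-v0.1-Q4_K_M.gguf",  # pick any file from the table above
    local_dir=".",  # save next to the script instead of the default HF cache
)
print(model_path)
```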

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
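
These GGUF files are intended for llama.cpp-compatible runtimes. As a rough sketch only (assuming the `llama-cpp-python` bindings are installed; any llama.cpp build at or newer than b2405 should also load these files), local inference looks roughly like this:

```python
from llama_cpp import Llama

# Load a downloaded GGUF file and run a single chat turn.
llm = Llama(
    model_path="starchat2-15b-v0.1-Q4_K_M.gguf",  # any file from the table above
    n_ctx=4096,        # context window; lower this if you run out of memory
    n_gpu_layers=-1,   # offload all layers to GPU when using a GPU-enabled build
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(response["choices"][0]["message"]["content"])
```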
