# NeuralBeagle14-7B-GGUF

## Original Model

[mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)

## Run with LlamaEdge

Note that the original model has a potential issue: it follows the ChatML format but lacks the ChatML special tokens (see the discussion "Model follows ChatML format, but does not have the special tokens for ChatML" on the original model's page).
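The original card's LlamaEdge instructions are not reproduced here, so the following is a minimal sketch of how GGUF chat models are typically launched with LlamaEdge's `llama-chat.wasm` app. It assumes WasmEdge is installed with the WASI-NN GGML plugin and that `llama-chat.wasm` is present in the working directory; the `Q5_K_M` file is an arbitrary pick from the table below, and `chatml` matches the prompt format discussed above.

```bash
# Sketch: chat with the model via LlamaEdge (assumes WasmEdge with the
# WASI-NN GGML plugin, plus llama-chat.wasm and the .gguf file in this dir).
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:NeuralBeagle14-7B-Q5_K_M.gguf \
  llama-chat.wasm \
  -p chatml
```

Swapping `llama-chat.wasm` for `llama-api-server.wasm` would expose an OpenAI-compatible HTTP endpoint instead of an interactive chat, per the usual LlamaEdge layout.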

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| NeuralBeagle14-7B-Q2_K.gguf | Q2_K | 2 | 2.72 GB | smallest, significant quality loss - not recommended for most purposes |
| NeuralBeagle14-7B-Q3_K_L.gguf | Q3_K_L | 3 | 3.82 GB | small, substantial quality loss |
| NeuralBeagle14-7B-Q3_K_M.gguf | Q3_K_M | 3 | 3.52 GB | very small, high quality loss |
| NeuralBeagle14-7B-Q3_K_S.gguf | Q3_K_S | 3 | 3.16 GB | very small, high quality loss |
| NeuralBeagle14-7B-Q4_0.gguf | Q4_0 | 4 | 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| NeuralBeagle14-7B-Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | medium, balanced quality - recommended |
| NeuralBeagle14-7B-Q4_K_S.gguf | Q4_K_S | 4 | 4.14 GB | small, greater quality loss |
| NeuralBeagle14-7B-Q5_0.gguf | Q5_0 | 5 | 5.00 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| NeuralBeagle14-7B-Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | large, very low quality loss - recommended |
| NeuralBeagle14-7B-Q5_K_S.gguf | Q5_K_S | 5 | 5.00 GB | large, low quality loss - recommended |
| NeuralBeagle14-7B-Q6_K.gguf | Q6_K | 6 | 5.94 GB | very large, extremely low quality loss |
| NeuralBeagle14-7B-Q8_0.gguf | Q8_0 | 8 | 7.70 GB | very large, extremely low quality loss - not recommended |
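Any of the files above can be fetched individually. As one example, the `huggingface-cli` tool (shipped with the `huggingface_hub` Python package) can download a single quant file; `Q4_K_M` is chosen here only because the table marks it as recommended:

```bash
# Download one quant file from this repo into the current directory.
pip install huggingface_hub
huggingface-cli download second-state/NeuralBeagle14-7B-GGUF \
  NeuralBeagle14-7B-Q4_K_M.gguf \
  --local-dir .
```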