# Llama-3-Chinese-8B-Instruct-v2-GGUF
This repository contains Llama-3-Chinese-8B-Instruct-v2-GGUF, the quantized version of Llama-3-Chinese-8B-Instruct-v2, compatible with llama.cpp, ollama, text-generation-webui (tgw), etc.
Note: this is an instruction (chat) model, which can be used for conversation, QA, etc.
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
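As a quick illustration of llama.cpp compatibility, here is a minimal sketch using the llama-cpp-python bindings (an assumption on our side; the GGUF files can equally be used with the llama.cpp CLI, ollama, or text-generation-webui). The local file name, quant choice, and generation parameters below are illustrative only.

```python
# Minimal sketch, assuming the llama-cpp-python package is installed
# (pip install llama-cpp-python) and a GGUF file from this repo has been
# downloaded locally. The file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-chinese-8b-instruct-v2-q4_k.gguf",  # use the quant you downloaded
    n_ctx=4096,             # context window size
    chat_format="llama-3",  # optional; recent versions can read the template from GGUF metadata
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "你好，请用中文介绍一下你自己。"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```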
## Performance
Metric: PPL, lower is better
Note: PPL for the v2 models is higher than for v1, as v2's base model (Meta-Llama-3-8B-Instruct) also has a higher PPL than v1's (Meta-Llama-3-8B).
| Quant | Size | PPL |
|---|---|---|
| Q2_K | 2.96 GB | 13.2488 +/- 0.17217 |
| Q3_K | 3.74 GB | 6.9618 +/- 0.08420 |
| Q4_0 | 4.34 GB | 6.8925 +/- 0.08437 |
| Q4_K | 4.58 GB | 6.4851 +/- 0.07892 |
| Q5_0 | 5.21 GB | 6.4608 +/- 0.07862 |
| Q5_K | 5.34 GB | 6.3742 +/- 0.07740 |
| Q6_K | 6.14 GB | 6.3494 +/- 0.07703 |
| Q8_0 | 7.95 GB | 6.3110 +/- 0.07673 |
| F16 | 14.97 GB | 6.3005 +/- 0.07658 |
## Others
For the full model, please see: https://huggingface.co./hfl/llama-3-chinese-8b-instruct-v2

For the LoRA-only model, please see: https://huggingface.co./hfl/llama-3-chinese-8b-instruct-v2-lora

If you have questions or issues regarding this model, please open an issue at https://github.com/ymcui/Chinese-LLaMA-Alpaca-3