A 16-bit (F16) GGUF conversion of https://huggingface.co./meta-llama/Llama-2-7b-chat-hf, suitable for use with llama.cpp and compatible runtimes.
For lower-precision quantized versions, see https://huggingface.co./models?search=thebloke/llama-2-7b-chat
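A minimal usage sketch: download the GGUF file and run it with llama.cpp's CLI. The exact filename inside the repo is an assumption here; check the repo's file listing for the real name.

```shell
# Fetch the GGUF file from the Hub (filename is assumed -- verify it
# against the repository's "Files and versions" tab before running).
huggingface-cli download pcuenq/Llama-2-7b-chat-gguf llama-2-7b-chat.gguf --local-dir .

# Run a short generation with llama.cpp's CLI.
# -m: model path, -p: prompt, -n: max tokens to generate
./llama-cli -m llama-2-7b-chat.gguf -p "Hello, how are you?" -n 64
```

Since this is the unquantized 16-bit file, expect it to need roughly 13 GB of memory; the quantized variants linked above trade some quality for a much smaller footprint.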