# Falcon3-Instruct GGUF Models
LlamaEdge compatible quants for Falcon3-Instruct models.
## Prompt template

Prompt type: `falcon3`

Prompt string

```text
<|system|>
You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible.
<|user|>
{user_message}
<|assistant|>
```

Context size: `32000`
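For reference, the sketch below assembles a single-turn prompt by hand in the `falcon3` format. The user message is a placeholder chosen for illustration; the system text is the default shown above.

```bash
# A minimal sketch: build one falcon3-format prompt for a single user turn.
# "What is the capital of France?" is a placeholder message, not part of
# the template itself.
USER_MESSAGE="What is the capital of France?"
printf '<|system|>\nYou are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible.\n<|user|>\n%s\n<|assistant|>\n' "$USER_MESSAGE"
```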
## Run as LlamaEdge service

```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-10B-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --model-name Falcon3-10B-Instruct \
  --prompt-template falcon3 \
  --ctx-size 32000
```
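Once the server is up, you can send it a test request. The sketch below assumes the server is listening on its default address of `localhost:8080` and uses the OpenAI-compatible chat completions endpoint; adjust the address if you started the server on a different socket.

```bash
# A minimal smoke test against the running API server, assuming the
# default listen address of localhost:8080 and the OpenAI-compatible
# /v1/chat/completions endpoint. The question is a placeholder.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Falcon3-10B-Instruct",
        "messages": [
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
```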
## Run as LlamaEdge command app

```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-10B-Instruct-Q5_K_M.gguf \
  llama-chat.wasm \
  --prompt-template falcon3 \
  --ctx-size 32000
```
| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Falcon3-10B-Instruct-Q2_K.gguf | Q2_K | 2 | 3.92 GB | smallest, significant quality loss - not recommended for most purposes |
| Falcon3-10B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 5.45 GB | small, substantial quality loss |
| Falcon3-10B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 5.05 GB | very small, high quality loss |
| Falcon3-10B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 4.59 GB | very small, high quality loss |
| Falcon3-10B-Instruct-Q4_0.gguf | Q4_0 | 4 | 5.91 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Falcon3-10B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 6.29 GB | medium, balanced quality - recommended |
| Falcon3-10B-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 5.95 GB | small, greater quality loss |
| Falcon3-10B-Instruct-Q5_0.gguf | Q5_0 | 5 | 7.14 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Falcon3-10B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 7.34 GB | large, very low quality loss - recommended |
| Falcon3-10B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 7.14 GB | large, low quality loss - recommended |
| Falcon3-10B-Instruct-Q6_K.gguf | Q6_K | 6 | 8.46 GB | very large, extremely low quality loss |
| Falcon3-10B-Instruct-Q8_0.gguf | Q8_0 | 8 | 11.0 GB | very large, extremely low quality loss - not recommended |
| Falcon3-10B-Instruct-f16.gguf | f16 | 16 | 20.6 GB | |
*Quantized with llama.cpp b4381*
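If you want to reproduce one of these quants yourself, something along the lines of the sketch below should work, assuming you have built llama.cpp at tag b4381 and have the f16 GGUF from the table above in the current directory.

```bash
# A minimal sketch of reproducing the Q5_K_M quant with llama.cpp's
# llama-quantize tool, assuming a llama.cpp b4381 build and the f16
# GGUF in the current directory.
./llama-quantize Falcon3-10B-Instruct-f16.gguf \
  Falcon3-10B-Instruct-Q5_K_M.gguf Q5_K_M
```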