GGUF LoRA adapters
Adapters extracted from fine-tuned models using mergekit-extract-lora. A usage sketch follows the list.
- ggml-org/LoRA-Llama-3-Instruct-abliteration-8B-F16-GGUF
- ggml-org/LoRA-Qwen2.5-1.5B-Instruct-abliterated-F16-GGUF
- ggml-org/LoRA-Qwen2.5-3B-Instruct-abliterated-F16-GGUF
- ggml-org/LoRA-Qwen2.5-7B-Instruct-abliterated-v3-F16-GGUF
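These adapters are plain GGUF files, so they can be applied on top of the corresponding base model at load time. Below is a minimal sketch using llama-cpp-python; the local file paths are placeholders for a downloaded base model and adapter, and the prompt is only illustrative.

```python
from llama_cpp import Llama

# Load a base model and apply an extracted LoRA adapter on top of it.
# Both paths are placeholders for locally downloaded GGUF files.
llm = Llama(
    model_path="Qwen2.5-7B-Instruct-Q8_0.gguf",                   # base model (placeholder)
    lora_path="LoRA-Qwen2.5-7B-Instruct-abliterated-v3-F16.gguf", # extracted adapter (placeholder)
    n_ctx=4096,
)

out = llm("Explain what a LoRA adapter is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```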
llama.vim
Recommended models for the llama.vim plugin. A download sketch follows the list.
- ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF (Text Generation)
- ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF (Text Generation)
- ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF (Text Generation)
- ggml-org/Qwen2.5-Coder-14B-Q8_0-GGUF (Text Generation)
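The GGUF files in these repositories can be fetched programmatically before pointing llama.cpp's server (which llama.vim talks to) at them. A minimal sketch with huggingface_hub follows; the filename is an assumption based on the repository's naming convention and should be checked against the repo's file listing.

```python
from huggingface_hub import hf_hub_download

# Download one of the recommended coder models for local use.
# The filename is assumed from the repo name; verify it in the repo's file list.
model_path = hf_hub_download(
    repo_id="ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF",
    filename="qwen2.5-coder-1.5b-q8_0.gguf",
)
print(model_path)  # pass this path to llama-server, which llama.vim connects to
```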
Additional LoRA adapter:
- ggml-org/LoRA-Deepthink-Reasoning-Qwen2.5-7B-Instruct-Q8_0-GGUF (Text Generation)