
QuantFactory/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-GGUF

This is a quantized version of huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2, created using llama.cpp.

Original Model Card

huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2

This is an uncensored version of deepseek-ai/DeepSeek-R1-Distill-Qwen-14B created with abliteration (see remove-refusals-with-transformers to learn more about it).
It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.

Important Note: This version is an improvement over the previous release, huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated, and resolves the problem reported with that model.

Use with ollama

You can use huihui_ai/deepseek-r1-abliterated directly:

ollama run huihui_ai/deepseek-r1-abliterated:14b
Format: GGUF
Model size: 14.8B params
Architecture: qwen2

Available quantizations: 4-bit, 8-bit
