Request for Q4 Quantized Version of the Current Model
Dear author,
I'm really interested in using the current model in a more memory-efficient way. Could you please consider providing a Q4 quantized version of the model? This would be extremely helpful for those of us with limited computational resources.
Thank you!
Best regards
@nowanti I'm a member of team mradermacher. We provide static and weighted/imatrix quants for all of our models, as well as many tens of thousands of popular models, under the mradermacher account.
Download page: https://hf.tst.eu/model#DeepSeek-R1-Distill-Qwen-14B-Uncensored-GGUF
Static quants: https://huggingface.co./mradermacher/DeepSeek-R1-Distill-Qwen-14B-Uncensored-GGUF
Imatrix/weighted quants: https://huggingface.co./mradermacher/DeepSeek-R1-Distill-Qwen-14B-Uncensored-i1-GGUF
If you are ever looking for quants of a specific model, don't hesitate to ask us at https://huggingface.co./mradermacher/model_requests/discussions