roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q8_0-GGUF

Repo: roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q8_0-GGUF
Original Model: DeepSeek-R1-Distill-Llama-70B
Organization: deepseek-ai
Quantized File: deepseek-r1-distill-llama-70b-q8_0.gguf
Quantization: GGUF
Quantization Method: Q8_0
Use Imatrix: False
Split Model: True

Overview

This is a GGUF Q8_0 quantized version of DeepSeek-R1-Distill-Llama-70B.
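
A minimal usage sketch with llama-cpp-python (not part of the original card): the shard filename, context size, and GPU offload settings below are assumptions, so check the repo's file list for the actual names. Because the model is split, you point llama.cpp at the first shard and it should locate the remaining shards in the same directory.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the split GGUF
# shards have been downloaded locally. The shard filename below is an
# assumption -- substitute the real first-shard name from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-distill-llama-70b-q8_0-00001-of-00002.gguf",  # first shard of the split model
    n_ctx=4096,        # context window; raise if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one paragraph."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```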

Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

Model size: 70.6B params
Architecture: llama
Precision: 8-bit

