Will quantised version be available?
Thanks for sharing, but what are the recommended ways to quantise this model?
Or will a quantised model be made available, so that inference is less resource-intensive?
Thanks
Did you see https://huggingface.co./models?other=base_model:quantized:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF?
Use the model tree section on model pages to see what quantizations are available.
NVIDIA hasn't released an official quantized version yet, but there are several community quantization efforts, as mentioned above.
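For anyone choosing between the community quantizations, a rough back-of-envelope on weight memory helps explain which bit-widths fit on a single GPU. This is only a sketch: it counts the weights alone and ignores KV cache, activations, and framework overhead, which is why an 8-bit 70B model (~70 GB of weights) is a tight fit even on an 80 GB card.

```python
def approx_weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate VRAM needed for the weights alone.

    Ignores KV cache, activations, and runtime overhead, so real
    usage will be noticeably higher than this estimate.
    """
    return num_params * bits_per_param / 8 / 1e9

# Llama-3.1-Nemotron-70B at common quantization levels
for bits in (16, 8, 4, 2):
    gb = approx_weight_memory_gb(70e9, bits)
    print(f"{bits}-bit weights: ~{gb:.0f} GB")
# 16-bit: ~140 GB (needs multiple GPUs)
#  8-bit:  ~70 GB (tight on one 80 GB H100/A100)
#  4-bit:  ~35 GB (comfortable on one 80 GB GPU)
```

This is why the 4-bit and FP8 community builds linked in this thread target single 80 GB GPUs, while the full-precision weights do not fit on one card.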
We also provide quantized versions at 4 down to 1.5 bits using VPTQ (https://github.com/microsoft/VPTQ), available here: https://huggingface.co./collections/VPTQ-community/vptq-llama-31-nemotron-70b-instruct-hf-without-finetune-671730b96f16208d0b3fe942 . Feel free to give us feedback!
Runs on a single H100 / A100 (80 GB): https://huggingface.co./mysticbeing/Llama-3.1-Nemotron-70B-Instruct-HF-FP8-DYNAMIC