Model Card for 4darsh-Dev/Meta-Llama-3-8B-autogptq-4bit

This repo contains a 4-bit quantized version of Meta's Meta-Llama-3-8B, produced with AutoGPTQ and PEFT.
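A minimal loading sketch, assuming the GPTQ weights can be loaded through the `transformers` GPTQ integration (which requires `auto-gptq` and `optimum` to be installed) and that a CUDA GPU is available for 4-bit inference. The repo name is taken from this card; everything else is an illustrative assumption, not a documented API for this checkpoint.

```python
# Hypothetical usage sketch for the quantized checkpoint named in this card.
MODEL_ID = "4darsh-Dev/Meta-Llama-3-8B-autogptq-4bit"


def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and 4-bit GPTQ model.

    Imports are deferred so the file can be read without transformers
    installed; loading the weights assumes a GPU and the auto-gptq +
    optimum packages (an assumption, not verified against this repo).
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # place quantized layers on the available GPU(s)
    )
    return tokenizer, model
```

After loading, generation works the same as with the full-precision model (e.g. `model.generate(**tokenizer(prompt, return_tensors="pt").to(model.device))`), with the memory footprint reduced by the 4-bit weights.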

Model Details
