LLaMA model fine-tuned with LoRA (1 epoch) on the Stanford Alpaca training dataset and quantized to 4-bit.

Because this model contains the merged LLaMA weights, it is subject to their license restrictions.
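
Below is a minimal usage sketch, not a tested recipe. It assumes the checkpoint can be loaded through the standard `transformers` `AutoModelForCausalLM` API and that the model expects the Stanford Alpaca instruction prompt format; depending on how the int4 weights are packed, a GPTQ-aware loader (e.g. AutoGPTQ or GPTQ-for-LLaMa) may be required instead.

```python
# Minimal sketch: load the checkpoint and run an Alpaca-style prompt.
# Assumption: the int4 weights load via the standard transformers API;
# a GPTQ-aware loader may be needed depending on the quantization format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nealchandra/alpaca-13b-hf-int4"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",          # place layers on available GPU(s)/CPU automatically
    torch_dtype=torch.float16,
)

# Stanford Alpaca instruction prompt template (no-input variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what LoRA fine-tuning is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The generated answer follows the `### Response:` marker, matching the prompt layout used for Alpaca training.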
