Llama 3.2 3B Instruct by Meta-Llama as Llamafile
For more information about llamafiles, see: https://github.com/Mozilla-Ocho/llamafile
This model is packaged into executable weights that we call llamafiles. This gives you the easiest and fastest way to use the model on Linux, macOS, Windows, FreeBSD, OpenBSD, and NetBSD systems you control, on both AMD64 and ARM64.
Quickstart
wget https://huggingface.co./wirthual/Meta-Llama-3.2-3B-Instruct-llamafile/resolve/main/Meta-Llama-3.2-3B-Instruct-Q8_0.llamafile
chmod +x Meta-Llama-3.2-3B-Instruct-Q8_0.llamafile
./Meta-Llama-3.2-3B-Instruct-Q8_0.llamafile
You can then use the completion mode of the GUI to experiment with this model. You can also prompt the model for completions on the command line:
./Meta-Llama-3.2-3B-Instruct-Q8_0.llamafile -p 'four score and seven' --log-disable
Note: Change the path depending on the version of the model you want to test.
For further information, please see the llamafile README.
Having trouble? See the "Gotchas" section of the README.
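Beyond the GUI and command-line completions, a running llamafile also serves an OpenAI-compatible HTTP API. A minimal sketch, assuming the default server address (http://localhost:8080) and the standard chat completions endpoint; the `model` value here is illustrative:

```shell
# Assumes the llamafile is already running in server mode, e.g.:
#   ./Meta-Llama-3.2-3B-Instruct-Q8_0.llamafile
# Send a chat request to the OpenAI-compatible endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Meta-Llama-3.2-3B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Why is the sky blue?"}
    ]
  }'
```

This lets you point existing OpenAI-client code at the local model by changing only the base URL.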
Technical Details
Llama 3.2 is optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.
Officially supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, though it was trained on a broader set of languages.
128K context length support
Derived from: lmstudio-community
Model creator: meta-llama
Original model: Llama-3.2-3B-Instruct
GGUF quantization: provided by bartowski based on llama.cpp release b3821
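As a rough sanity check on the download size: Q8_0 stores each weight in 8 bits plus one fp16 scale per 32-weight block, i.e. about 8.5 bits per weight. A back-of-the-envelope estimate, assuming an approximate parameter count of 3.21B for Llama 3.2 3B:

```shell
# Estimate Q8_0 file size: params * bits-per-weight / 8 bits-per-byte
awk 'BEGIN { printf "%.1f GB\n", 3.21e9 * 8.5 / 8 / 1e9 }'
# prints "3.4 GB"
```

The actual llamafile is somewhat larger, since it also bundles the embedding/output tensors, metadata, and the llamafile runtime itself.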
Special thanks
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Disclaimers
wirthual is not the creator, originator, or owner of any Model. wirthual does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Model. You understand that Models can produce content that is offensive, harmful, inaccurate, deceptive, or otherwise inappropriate. Each Model is the sole responsibility of the person or entity who originated it. wirthual may not monitor or control the Models and cannot, and does not, take responsibility for any Model. wirthual disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Models. wirthual further disclaims any warranty that a Model will meet your requirements; be secure, uninterrupted, or available at any time or location; or be error-free or virus-free, or that any errors will be corrected. You are solely responsible for any damage resulting from your use of or access to the Models, your downloading of any Model, or your use of any other Model provided by or through wirthual.