---
library_name: transformers
license: apache-2.0
quantized_by: stillerman
tags:
- llamafile
- gguf
language:
- en
datasets:
- HuggingFaceTB/smollm-corpus
---

# SmolLM-1.7B-Instruct - llamafile

This repo contains `.gguf` and `.llamafile` files for [SmolLM-1.7B-Instruct](https://huggingface.co./collections/HuggingFaceTB/smollm-6695016cad7167254ce15966).

A [llamafile](https://llamafile.ai/) is a single-file executable that runs locally on most computers, with no installation required.

# Use it in 3 lines!

```
wget https://huggingface.co./stillerman/SmolLM-1.7B-Instruct-Llamafile/resolve/main/SmolLM-1.7B-Instruct-F16.llamafile
chmod a+x SmolLM-1.7B-Instruct-F16.llamafile
./SmolLM-1.7B-Instruct-F16.llamafile
```

# Thank you to

- Hugging Face for the [SmolLM model family](https://huggingface.co./collections/HuggingFaceTB/smollm-6695016cad7167254ce15966)
- Mozilla for [Llamafile](https://llamafile.ai/)
- [llama.cpp](https://github.com/ggerganov/llama.cpp/)
- [Justine Tunney](https://huggingface.co./jartine) and [Compilade](https://github.com/compilade) for their help
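
# Query it from code

Beyond the interactive chat, a running llamafile also serves an OpenAI-compatible HTTP API (by default on `http://localhost:8080`; the port and lack of an API key are assumptions here — check your llamafile's startup output). A minimal sketch using only the Python standard library:

```python
import json
import urllib.request


def build_chat_request(prompt, model="SmolLM-1.7B-Instruct"):
    # OpenAI-style chat-completions payload. A single-model llamafile
    # server ignores the model name, but the schema expects one.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(prompt, base_url="http://localhost:8080"):
    # POST to the llamafile's OpenAI-compatible endpoint
    # (assumes the default port and no authentication).
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("What is the capital of France?"))
```

Run the llamafile first (third line of the snippet above), then run this script in another terminal.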