---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
inference:
  parameters:
    temperature: 0.6
    top_p: 0.95
    top_k: 50
    repetition_penalty: 1.2
widget:
- text: Photosynthesis is
  example_title: Textbook
  group: Completion
- text: ' [INST] How to take care of plants? [/INST] '
  example_title: Wikihow
  group: Completion
- text: ' [INST] Generate a story about a flying cat [/INST] '
  example_title: Story
  group: Completion
base_model: HuggingFaceTB/cosmo-1b
tags:
- llama-cpp
- gguf-my-repo
---

# AIronMind/cosmo-1b-Q4_K_M-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/cosmo-1b`](https://huggingface.co./HuggingFaceTB/cosmo-1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co./spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co./HuggingFaceTB/cosmo-1b) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo AIronMind/cosmo-1b-Q4_K_M-GGUF --hf-file cosmo-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo AIronMind/cosmo-1b-Q4_K_M-GGUF --hf-file cosmo-1b-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AIronMind/cosmo-1b-Q4_K_M-GGUF --hf-file cosmo-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AIronMind/cosmo-1b-Q4_K_M-GGUF --hf-file cosmo-1b-q4_k_m.gguf -c 2048
```
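
Once `llama-server` is running, you can send it requests over HTTP. The snippet below is a minimal sketch, assuming the server is listening on its default address (`http://localhost:8080`) and using the `/completion` endpoint; the `[INST] ... [/INST]` prompt format mirrors the widget examples above, and the sampling values mirror the inference parameters in this card (temperature 0.6, top_p 0.95).

```bash
# Query a running llama-server instance.
# Assumptions: default host/port (127.0.0.1:8080) and the /completion endpoint;
# the prompt reuses the [INST] ... [/INST] format from the widget examples above.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": " [INST] How to take care of plants? [/INST] ",
    "n_predict": 256,
    "temperature": 0.6,
    "top_p": 0.95
  }'
```

The server should return a JSON response with the generated text in its `content` field.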