Llama.cpp-compatible version of the original 13B model.

How to run:

First, install git-lfs and download the converted weights:

```bash
sudo apt-get install git-lfs
git clone https://huggingface.co./IlyaGusev/llama_13b_ru_turbo_alpaca_lora_llamacpp
cd llama_13b_ru_turbo_alpaca_lora_llamacpp && git lfs install && git lfs pull && cd ..
```
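If git-lfs was not set up before the pull, the clone will contain small pointer files instead of the actual weights. A quick sanity check, assuming the weights sit in a `13B/` subdirectory as the run command below expects (this layout check is an illustration, not part of the original instructions):

```bash
# The quantized 13B weights should be multi-gigabyte files,
# not ~130-byte git-lfs pointer text files.
ls -lh llama_13b_ru_turbo_alpaca_lora_llamacpp/13B/
```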

Then build llama.cpp and run inference with the quantized model (the prompt is Russian for "Question: Why is the grass green? Answer:"):

```bash
git clone https://github.com/ggerganov/llama.cpp
cp -R llama_13b_ru_turbo_alpaca_lora_llamacpp/* llama.cpp/models/
cd llama.cpp
make
./main -m ./models/13B/ggml-model-q4_0.bin -p "Вопрос: Почему трава зеленая? Ответ:" -n 512 --temp 0.1
```
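For multi-turn use, llama.cpp's interactive mode can stop at a reverse prompt and hand control back to the user. A minimal sketch with the same model; flag spellings vary between llama.cpp versions (newer builds ship `llama-cli` instead of `main`), so check `./main --help`:

```bash
# Interactive chat-style session: generation pauses whenever the model
# emits the reverse prompt "Вопрос:" ("Question:") and waits for input.
./main -m ./models/13B/ggml-model-q4_0.bin \
  -i --color \
  -r "Вопрос:" \
  -n 256 --temp 0.1 --repeat_penalty 1.1 \
  -p "Вопрос: Почему трава зеленая? Ответ:"
```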