danielhanchen committed · verified · Commit 6b9a45d · Parent: 000972f

Update README.md

Files changed: README.md (+10, −0)
So, **1 + 1 = 2**. [end of text]
```
5. If you have a GPU with 24GB of VRAM (an RTX 4090, for example), you can offload 5 layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers.
```bash
./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-V3-GGUF/DeepSeek-V3-Q2_K_XS/DeepSeek-V3-Q2_K_XS-00001-of-00005.gguf \
    --cache-type-k q5_0 \
    --threads 16 \
    --prompt '<|User|>What is 1+1?<|Assistant|>' \
    --n-gpu-layers 5
```
6. Use a `q4_0` KV cache for even faster workloads, at the expense of some degraded accuracy.
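As a sketch, step 6 only changes the K-cache quantization flag relative to the step 5 command; all other flags stay the same:

```bash
# Same invocation as in step 5, but with the K cache quantized to q4_0
# instead of q5_0 (less memory and faster, slightly lower accuracy).
./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-V3-GGUF/DeepSeek-V3-Q2_K_XS/DeepSeek-V3-Q2_K_XS-00001-of-00005.gguf \
    --cache-type-k q4_0 \
    --threads 16 \
    --prompt '<|User|>What is 1+1?<|Assistant|>' \
    --n-gpu-layers 5
```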

# Finetune Llama 3.3, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb