Update README.md

---
license: apache-2.0
---

Origin: https://huggingface.co/NousResearch/Nous-Hermes-2-Vision-Alpha

This is the quantized GGUF version of a function-calling fine-tuned LLaVA-type model that uses a tiny vision tower.

Sharing it because it's novel and it has been a pain to convert.

Example invocation with llama.cpp's llava-cli on Windows:

\build\bin\Release\llava-cli.exe -m Q:\models\llava\Nous-Hermes-2-Vision\ggml-model-q5_k --mmproj Q:\models\llava\Nous-Hermes-2-Vision\mmproj-model-f16.gguf -ngl 80 -p 1025 --image path/to/image -p "Describe the image (use the proper syntax)"
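
If you don't already have llava-cli.exe, a default llama.cpp CMake build on Windows should produce it under build\bin\Release. The steps below are the generic llama.cpp build flow, not anything specific to this model, so treat them as a rough sketch:

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
REM llava-cli.exe (and quantize.exe) should now be under build\bin\Release
```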

If you wish to quantize the model yourself, you currently need this PR: https://github.com/ggerganov/llama.cpp/pull/4313
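
As a rough sketch of that step (hypothetical file names; it assumes you already have an f16 GGUF of the language model and have built llama.cpp's quantize tool from the branch in that PR):

```
REM Quantize the language model GGUF; Q5_K_M is one of the standard llama.cpp quant types
.\build\bin\Release\quantize.exe Q:\models\llava\Nous-Hermes-2-Vision\ggml-model-f16.gguf Q:\models\llava\Nous-Hermes-2-Vision\ggml-model-q5_k.gguf Q5_K_M
REM The mmproj (vision tower) file is typically kept at f16 and is not passed through quantize
```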

Warning: the model is not very good at this point; it is shared mostly for testing purposes.