Add VLLM tag #6
by osanseviero - opened

README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+pipeline_tag: image-text-to-text
 ---
 Update: PR is merged, llama.cpp now natively supports these models
 Important: Verify that processing a simple question with any image uses at least 1200 tokens of prompt processing; that shows the new PR is in use.
@@ -11,5 +12,4 @@ If your prompt is just 576 + a few tokens, you are using llava-1.5 code (or proj
 The mmproj files are the embedded ViTs that came with llava-1.6; I've not compared them, but given the previous releases from the team I'd be surprised if the ViT has not been fine-tuned this time.
 If that's the case, using another ViT can cause issues.
 
-Original models: https://github.com/haotian-liu/LLaVA
-
+Original models: https://github.com/haotian-liu/LLaVA
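For context on the token-count check the card asks for: a minimal sketch of how one might verify it with llama.cpp's llava-cli example binary. The file names below are placeholders, not files named in this repo, and the exact timing output format may vary by llama.cpp version.

# Run llava-cli with the language model, the matching mmproj
# (ViT + projector) file, and a test image (all paths are placeholders).
./llava-cli \
  -m llava-1.6-model.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image test.jpg \
  -p "What is in this image?"

# In the timing summary printed at the end, check the prompt eval line, e.g.:
#   prompt eval time = ... ms / 1375 tokens (...)
# Roughly 1200+ prompt tokens => the new llava-1.6 handling is active.
# 576 + a few tokens          => the old llava-1.5 projection path is in use.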