Strange behaviour of Llama3.2-vision - it behaves like a text model
1 · #9 opened about 2 months ago by jirkazcech
How to use it in Ollama
1 · #8 opened about 2 months ago by vejahetobeu
Exporting to GGUF
5 · #7 opened 2 months ago by krasivayakoshka
Training with images
4 · #6 opened 3 months ago by Khawn2u
AttributeError: Model MllamaForConditionalGeneration does not support BitsAndBytes quantization yet.
1 · #5 opened 3 months ago by luizhsalazar
How much VRAM is needed?
3 · #4 opened 4 months ago by Dizzl500
How to load this model?
3 · #3 opened 4 months ago by benTow07
Can you post the script that was used to quantize this model, please?
10 · #2 opened 5 months ago by ctranslate2-4you