GGUF format?
Will this model be available in GGUF format?
I tried to convert it myself with ggml-org/gguf-my-repo and got the following error: `ERROR:hf-to-gguf:Model Idefics3ForConditionalGeneration is not supported`.
It's also not clear whether llama.cpp's convert_hf_to_gguf.py supports the Idefics3 architecture (source: Bug: Quantizing HuggingFaceM4/Idefics3-8B-Llama3 fails with error #8902).
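For anyone hitting the same error: you can check a model's declared architecture before attempting conversion by reading the `architectures` field of its config.json. A minimal sketch (the `UNSUPPORTED` set and `check_convertible` helper are hypothetical, not part of llama.cpp; the set reflects the state at the time of this thread):

```python
import json

# Hypothetical set of architectures known (at the time of this thread)
# to be rejected by llama.cpp's convert_hf_to_gguf.py.
UNSUPPORTED = {"Idefics3ForConditionalGeneration"}

def check_convertible(config_json: str) -> bool:
    """Return True if none of the model's declared architectures
    are in the known-unsupported set."""
    config = json.loads(config_json)
    architectures = config.get("architectures", [])
    return not any(arch in UNSUPPORTED for arch in architectures)

# The config.json of HuggingFaceM4/Idefics3-8B-Llama3 declares
# "Idefics3ForConditionalGeneration", so conversion fails.
print(check_convertible('{"architectures": ["Idefics3ForConditionalGeneration"]}'))  # False
print(check_convertible('{"architectures": ["LlamaForCausalLM"]}'))  # True
```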
Hi! Currently Idefics3 is not supported by llama.cpp, so the conversion script will not work. We will work on this if there is enough interest from the community, so do react to this message if you're interested!
There is a lot of interest in the community, and the PR to add support seems to be stuck :( I'm trying to work my way through the errors there:
https://github.com/ggml-org/llama.cpp/pull/11292
@andito