How to use llama-3.2-11b-vision in vLLM?

#85 opened by WaltonFuture

I followed this script (https://docs.vllm.ai/en/latest/getting_started/examples/offline_inference_vision_language.html) to run llama-3.2-11b-vision in vLLM, but I find that the model can't stop generating correctly.
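
For reference, here is roughly what I'm running, with an explicit `max_tokens` and `stop_token_ids` added as a workaround. This is just a sketch based on that example page: the prompt format is the mllama one from the docs, and the stop-token IDs are the usual Llama 3.x end tokens, which I'm assuming apply to this checkpoint. The base (non-Instruct) model has no chat-style end-of-turn token, so without a hard cap it tends to run on.

```python
from vllm import LLM, SamplingParams
from PIL import Image

# Base (non-Instruct) checkpoint; the Instruct variant with its chat
# template may stop more reliably out of the box.
llm = LLM(
    model="meta-llama/Llama-3.2-11B-Vision",
    max_model_len=4096,
    max_num_seqs=16,
    enforce_eager=True,
)

# mllama prompt format from the vLLM vision-language example.
prompt = "<|image|><|begin_of_text|>Describe the image briefly."
image = Image.open("example.jpg")  # hypothetical local image

sampling_params = SamplingParams(
    temperature=0.2,
    max_tokens=256,  # hard cap so a base model can't run on forever
    # Llama 3.x end tokens (<|end_of_text|>, <|eom_id|>, <|eot_id|>);
    # assumed to be correct for this checkpoint.
    stop_token_ids=[128001, 128008, 128009],
)

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params=sampling_params,
)
print(outputs[0].outputs[0].text)
```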

Hi! I have a problem getting access to this model.
Could you please explain how you applied via the 'Accept agreement' form on the model card, and whether it was successful on the first try?
(Reply in this thread or via mail: [email protected])

You can download llama-3.2-11b-vision from other people's HF repos.

Could you please send a link to such a repo? I searched HF for 'llama-3.2-11b-vision' and only found datasets, Spaces, and the official repos, where my access request was rejected. On Google I found a link to NVIDIA's usage page, Inferless on GitHub, the notebook you're working on, and Ollama (which I believe is not the same as downloading from HF).

unsloth/Llama-3.2-11B-Vision
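
You can pull it with huggingface_hub, e.g. (the `local_dir` is just an example path):

```python
from huggingface_hub import snapshot_download

# Repo name taken from the reply above; local_dir is an arbitrary example path.
snapshot_download(
    repo_id="unsloth/Llama-3.2-11B-Vision",
    local_dir="./Llama-3.2-11B-Vision",
)
```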

Thanks a lot!
