
I don't think vLLM can run inference on those binaries; GGUF is the ggml/llama.cpp format.
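
For reference, GGUF files are normally loaded with llama.cpp or its Python bindings rather than with vLLM. A minimal sketch using llama-cpp-python (the model filename and prompt here are hypothetical):

```python
# Assumes: pip install llama-cpp-python
from llama_cpp import Llama

# Load a GGUF model file with llama.cpp's Python bindings.
# "model.Q4_K_M.gguf" is a placeholder path, not a file from this repo.
llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=2048)

# Run a simple completion to verify the model loads and generates.
output = llm("Describe this model format in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```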

This is for vision LLMs, not the vLLM library; we'll change the wording to make that clearer.

cmp-nct changed pull request status to merged
