It's not working with vLLM

#7 · opened by BelalElhossany

I'm trying to run it with vLLM, but it fails with the following error:

```
RuntimeError: Failed to load the model config. If the model is a custom model not yet available in the HuggingFace transformers library, consider setting trust_remote_code=True in LLM or using the --trust-remote-code flag in the CLI.
```
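In case it helps, here is a minimal sketch of the fix the error message itself suggests, assuming the standard vLLM Python API; the model ID below is a placeholder for whatever repo you're loading:

```python
# Sketch: pass trust_remote_code=True so vLLM allows transformers to load
# the repo's custom config/model code. "org/custom-model" is a placeholder.
from vllm import LLM

llm = LLM(model="org/custom-model", trust_remote_code=True)
outputs = llm.generate("Hello, my name is")
print(outputs[0].outputs[0].text)
```

On the CLI side, the equivalent is the `--trust-remote-code` flag, e.g. `vllm serve org/custom-model --trust-remote-code` in recent vLLM versions.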
