Wish: convert dtype to float16

#1
by ylhou - opened

Hi, when we run this model (bfloat16) with the popular serving engine vLLM, we hit a problem: vLLM's GPTQ kernels currently only support float16 precision, because the exllama kernel is tailored for float16.
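For reference, a rough reproduction with vLLM's offline Python API (a sketch only: the model ID is a placeholder and the exact error text may differ across vLLM versions):

```python
from vllm import LLM

# Sketch: "codefuse-ai/<this-model>" is a placeholder for this repo's ID.
# With the checkpoint's native bfloat16, vLLM rejects the combination,
# since the GPTQ/exllama kernel only supports float16.
llm = LLM(
    model="codefuse-ai/<this-model>",
    quantization="gptq",
    dtype="bfloat16",   # follows the checkpoint -> raises a dtype/quantization error
)

# Overriding the dtype at load time is the current workaround:
# llm = LLM(model="codefuse-ai/<this-model>", quantization="gptq", dtype="float16")
```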

CodeFuse AI org


Hi, thanks for your feedback! I'll take care of this issue.

I was trying to run the model with vLLM on 8 GPUs, but I got the following error. Is this a known issue, or is there a workaround? Thanks.

ValueError: The input size is not aligned with the quantized weight shape. This can be caused by too large tensor parallel size
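If I understand the error, it is a divisibility constraint between each GPU's shard of a quantized weight and the GPTQ group size. A rough sketch of that check with made-up sizes (the real values depend on this model's config):

```python
# Hypothetical sizes, not this model's actual config.
group_size = 128      # common GPTQ group size (assumption)
input_size = 13824    # some weight's input dimension (made up)

for tp_size in (4, 8):
    shard = input_size // tp_size   # per-GPU slice of the input dimension
    print(f"tp={tp_size}: shard={shard}, aligned={shard % group_size == 0}")

# With these numbers, tp=4 gives an aligned shard (3456 = 27 * 128),
# while tp=8 gives 1728, which is not a multiple of 128 -> the ValueError above.
```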

CodeFuse AI org
edited Apr 15

@ylhou @ganboliu Hi, some users have successfully run this model with vllm-0.3.3 using the following command:

python -m vllm.entrypoints.api_server --model $model --max-model-len 16384 --port 8000 --gpu-memory-utilization 0.9 --tensor-parallel-size 4 --quantization gptq --dtype float16

Could you try this command again?
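Once the server is up, you can query it over HTTP; a minimal sketch against the demo api_server's /generate endpoint (the prompt and sampling fields below are just examples):

```python
import requests

# Sketch: the api_server started above exposes a simple /generate endpoint;
# extra JSON fields are passed through as sampling parameters.
resp = requests.post(
    "http://localhost:8000/generate",
    json={
        "prompt": "def quicksort(arr):",  # example prompt
        "max_tokens": 128,
        "temperature": 0.2,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["text"])
```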

twelveand0 changed discussion status to closed
