quant versions?
Congrats on the launch, awesome as usual. I know the community will provide, but it would be great to have "official" quantized versions... just saying!
Thanks again for this model. :)
A GPTQ-quantized model is publicly accessible at https://huggingface.co./shuyuej/Llama-3.3-70B-Instruct-GPTQ.
The source code for quantization is available here.
For further details, please check here.
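For orientation only, the snippet below is a minimal sketch of 4-bit GPTQ quantization using the auto-gptq library. It is not the PodGPT quantization script (see the links above for that); the calibration text and hyperparameters shown are illustrative assumptions.

# Illustrative sketch only (not the PodGPT script): 4-bit GPTQ quantization with auto-gptq
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "meta-llama/Llama-3.3-70B-Instruct"
quantized_dir = "Llama-3.3-70B-Instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(base_model)
# A tiny calibration set for illustration; real runs use a larger, representative corpus
examples = [tokenizer("The capital of France is Paris.")]

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights
    group_size=128,  # per-group quantization
    desc_act=True,   # activation-order quantization
)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)          # run GPTQ calibration
model.save_quantized(quantized_dir)
tokenizer.save_pretrained(quantized_dir)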
Please take a look at our vLLM inference code if you are interested: https://github.com/vkola-lab/PodGPT/blob/main/utils/eval_utils.py#L63-L124.
We provide the vLLM-based model inference code below.
Model Inference Without LoRA
from vllm import LLM, SamplingParams

# Model name and hyperparameters
model_name = "shuyuej/Llama-3.3-70B-Instruct-GPTQ"
num_gpus_vllm = 4  # Number of GPUs to use
gpu_utilization_vllm = 0.95  # GPU memory utilization (from 0 to 1)
max_model_len_vllm = 2048  # Maximum model context length
max_new_tokens = 1024  # Maximum number of generated tokens

# The input prompts and sampling parameters for text generation
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(
    temperature=0,
    top_p=1,
    max_tokens=max_new_tokens,
)

# Initialize the vLLM engine
llm = LLM(
    model=model_name,
    tokenizer=model_name,
    # As of Dec. 14th, 2024, vLLM only supports float16 for GPTQ quantization
    dtype='float16',
    quantization="GPTQ",
    # Acknowledgement: Benjamin Kitor
    # https://github.com/vllm-project/vllm/issues/2794
    # Reference: https://github.com/vllm-project/vllm/issues/1908
    distributed_executor_backend="mp",
    tensor_parallel_size=num_gpus_vllm,
    gpu_memory_utilization=gpu_utilization_vllm,
    # Note: we cap the context length only to save GPU memory
    max_model_len=max_model_len_vllm,
    disable_custom_all_reduce=True,
    enable_lora=False,
)

# Generate responses using the vLLM engine
completions = llm.generate(prompts, sampling_params)
for output in completions:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
Model Inference With LoRA
We have also shared our trained LoRA adapter here. Please download it manually if needed.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Model name and hyperparameters
model_name = "shuyuej/Llama-3.3-70B-Instruct-GPTQ"
lora_path = "checkpoint-18640"  # The path to your LoRA adapter
num_gpus_vllm = 4  # Number of GPUs to use
gpu_utilization_vllm = 0.95  # GPU memory utilization (from 0 to 1)
max_model_len_vllm = 2048  # Maximum model context length
max_new_tokens = 1024  # Maximum number of generated tokens

# The input prompts and sampling parameters for text generation
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(
    temperature=0,
    top_p=1,
    max_tokens=max_new_tokens,
)

# Initialize the vLLM engine
llm = LLM(
    model=model_name,
    tokenizer=model_name,
    # As of Dec. 14th, 2024, vLLM only supports float16 for GPTQ quantization
    dtype='float16',
    quantization="GPTQ",
    # Acknowledgement: Benjamin Kitor
    # https://github.com/vllm-project/vllm/issues/2794
    # Reference: https://github.com/vllm-project/vllm/issues/1908
    distributed_executor_backend="mp",
    tensor_parallel_size=num_gpus_vllm,
    gpu_memory_utilization=gpu_utilization_vllm,
    # Note: we cap the context length only to save GPU memory
    max_model_len=max_model_len_vllm,
    disable_custom_all_reduce=True,
    enable_lora=True,
)

# Generate responses using the vLLM engine, applying the LoRA adapter
completions = llm.generate(prompts, sampling_params, lora_request=LoRARequest("adapter", 1, lora_path))
for output in completions:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
We also provide demo code for real-world deployment here.
🔥 Real-world deployment
For real-world deployment, please refer to the vLLM Distributed Inference and Serving and OpenAI-Compatible Server documentation.
vLLM can be deployed as a server that implements the OpenAI API protocol, which allows it to serve as a drop-in replacement for applications using the OpenAI API. By default, the server starts at http://localhost:8000.
vllm serve shuyuej/Llama-3.3-70B-Instruct-GPTQ \
--quantization gptq \
--trust-remote-code \
--dtype float16 \
--max-model-len 4096 \
--distributed-executor-backend mp \
--pipeline-parallel-size 4 \
--api-key token-abc123
Please check here if you want to change the engine arguments.
If you want to train a LoRA adapter, we provide the full training code here: https://github.com/vkola-lab/PodGPT?tab=readme-ov-file#-train-quantized-large-models.
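As a rough sketch only (the actual PodGPT training pipeline lives in the repository above), attaching a LoRA adapter with the Hugging Face peft library typically looks like the snippet below; the rank, alpha, and target modules are illustrative assumptions, not the PodGPT settings.

# Illustrative sketch only; see the PodGPT repository for the real training code
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "shuyuej/Llama-3.3-70B-Instruct-GPTQ"
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

lora_config = LoraConfig(
    r=16,             # LoRA rank (illustrative)
    lora_alpha=32,    # LoRA scaling factor (illustrative)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# ...train with your preferred Trainer, then save only the adapter weights:
model.save_pretrained("my-lora-adapter")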
If you already have a LoRA adapter and want to deploy it, please refer to the vLLM documentation for a detailed guide.
It provides step-by-step instructions on how to serve LoRA adapters effectively in a vLLM environment.
As noted above, our trained LoRA adapter is shared here; please download it manually if needed:
git clone https://huggingface.co./shuyuej/Public-Shared-LoRA-for-Llama-3.3-70B-Instruct-GPTQ
Then, use vLLM to serve the base model with the LoRA adapter by including the --enable-lora flag and specifying --lora-modules:
vllm serve shuyuej/Llama-3.3-70B-Instruct-GPTQ \
--quantization gptq \
--trust-remote-code \
--dtype float16 \
--max-model-len 4096 \
--distributed-executor-backend mp \
--pipeline-parallel-size 4 \
--api-key token-abc123 \
--enable-lora \
--lora-modules adapter=Public-Shared-LoRA-for-Llama-3.3-70B-Instruct-GPTQ/checkpoint-18640
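With --lora-modules, the adapter is registered under the name adapter and can be selected per request via the model field. A minimal sketch with the synchronous openai client, assuming the server command above, might look like:

# Minimal sketch: select the LoRA adapter by the name registered via --lora-modules
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")
response = client.chat.completions.create(
    model="adapter",  # "adapter" is the name given in --lora-modules above
    messages=[{"role": "user", "content": "What are the differences between DNA and RNA?"}],
    max_tokens=256,
    temperature=0.2,
)
print(response.choices[0].message.content)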
Since this server is compatible with the OpenAI API, you can use it as a drop-in replacement for any application built on the OpenAI API.
For example, another way to query the server is via the openai Python package:
#!/usr/bin/env python
# coding=utf-8
import time
import asyncio
from openai import AsyncOpenAI
# Our system prompt
SYSTEM_PROMPT = (
    "I am PodGPT, a large language model developed by the Kolachalama Lab in Boston, "
    "specializing in science, technology, engineering, mathematics, and medicine "
    "(STEMM)-related research and education, powered by podcast audio.\n"
    "I provide information based on established scientific knowledge but must not offer "
    "personal medical advice or present myself as a licensed medical professional.\n"
    "I will maintain a consistently professional and informative tone, avoiding humor, "
    "sarcasm, and pop culture references.\n"
    "I will prioritize factual accuracy and clarity while ensuring my responses are "
    "educational and non-harmful, adhering to the principle of 'do no harm'.\n"
    "My responses are for informational purposes only and should not be considered a "
    "substitute for professional consultation."
)

# Initialize the AsyncOpenAI client
client = AsyncOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)
async def main(message):
    """
    Streaming responses with async usage and "await" with each API call:
    Reference: https://github.com/openai/openai-python?tab=readme-ov-file#streaming-responses
    :param message: The user query
    """
    start_time = time.time()
    stream = await client.chat.completions.create(
        model="shuyuej/Llama-3.3-70B-Instruct-GPTQ",
        messages=[
            {
                "role": "system",
                "content": SYSTEM_PROMPT,
            },
            {
                "role": "user",
                "content": message,
            },
        ],
        max_tokens=2048,
        temperature=0.2,
        top_p=1,
        stream=True,
        extra_body={
            "ignore_eos": False,
            # https://huggingface.co./shuyuej/Llama-3.3-70B-Instruct-GPTQ/blob/main/config.json#L10-L14
            "stop_token_ids": [128001, 128008, 128009],
        },
    )

    print(f"The user's query is\n {message}\n ")
    print("The model's response is\n")
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")
    print(f"\nInference time: {time.time() - start_time:.2f} seconds\n")
    print("=" * 100)


if __name__ == "__main__":
    # Some random user queries
    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
        "Can you tell me more about Bruce Lee?",
        "What are the differences between DNA and RNA?",
        "What is dementia and Alzheimer's disease?",
        "Tell me the differences between Alzheimer's disease and dementia",
    ]
    # Conduct model inference
    for message in prompts:
        asyncio.run(main(message=message))
        print("\n\n")
Merry Christmas and Happy New Year!
Best regards,
Shuyue
Dec. 21st, 2024