`config.eos_token_id` being a list causes problems
Hi there!
The model config's `eos_token_id` is of type `list`, but it is supposed to be an `int` according to `transformers`'s `configuration_utils.py::PretrainedConfig`. This causes a problem when serving the model with TGI:
```
File "/opt/conda/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3347, in batch_encode_plus
    padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
File "/opt/conda/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2976, in _get_padding_truncation_strategies
    if padding_strategy != PaddingStrategy.DO_NOT_PAD and (self.pad_token is None or self.pad_token_id < 0):
TypeError: '<' not supported between instances of 'NoneType' and 'int'
2024-10-10T15:39:46.502649Z ERROR warmup{max_input_length=4095 max_prefill_tokens=4145 max_total_tokens=4096 max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:54: Server error: '<' not supported between instances of 'NoneType' and 'int'
Error: Backend(Warmup(Generation("'<' not supported between instances of 'NoneType' and 'int'")))
2024-10-10T15:39:46.557801Z ERROR text_generation_launcher: Webserver Crashed
2024-10-10T15:39:46.557834Z INFO text_generation_launcher: Shutting down shards
```
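The final crash is just Python refusing to compare `None` with an `int`; a minimal, transformers-free reproduction of that comparison:

```python
# pad_token_id ends up as None, and `None < 0` is not a valid comparison
# in Python 3, which is exactly the TypeError in the traceback above.
pad_token_id = None

try:
    pad_token_id < 0
except TypeError as e:
    print(e)  # "'<' not supported between instances of 'NoneType' and 'int'"
```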
What causes the problem in TGI is that, if the model's tokenizer does not have a padding token, TGI attempts to set one from the `eos_token`: after setting `tokenizer.pad_token_id`, the tokenizer's `_pad_token` becomes `['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']`. When `tokenizer(..., padding=True)` is called, `tokenizer.pad_token` and `tokenizer.pad_token_id` are accessed. As per the code below, `tokenizer.pad_token` becomes the string `"['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']"` and `tokenizer.pad_token_id` becomes `None`, which causes the error above.
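To make the chain of events concrete, here is a simplified stand-in (not the actual `transformers` implementation; the `ToySpecialTokens` class and its vocab are invented for illustration) showing how a list-valued pad token stringifies into a token that no vocabulary lookup can resolve:

```python
class ToySpecialTokens:
    """Toy mimic of the pad_token / pad_token_id properties discussed above."""

    def __init__(self, vocab):
        self.vocab = vocab      # token string -> id
        self._pad_token = None

    @property
    def pad_token(self):
        # The stored value is stringified, so a list becomes the literal
        # string "['<|end_of_text|>', ...]".
        return str(self._pad_token) if self._pad_token is not None else None

    @property
    def pad_token_id(self):
        if self._pad_token is None:
            return None
        # The stringified list is not in the vocab, so the lookup yields None.
        return self.vocab.get(self.pad_token)


vocab = {"<|end_of_text|>": 128001, "<|eom_id|>": 128008, "<|eot_id|>": 128009}
tok = ToySpecialTokens(vocab)

# TGI-style fallback: pad token set from eos_token, which here is a *list*.
tok._pad_token = ["<|end_of_text|>", "<|eom_id|>", "<|eot_id|>"]

print(tok.pad_token)     # "['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']"
print(tok.pad_token_id)  # None -> the later `pad_token_id < 0` check raises
```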
This problem also exists for `meta-llama/Llama-3.2-3B-Instruct` and `meta-llama/Llama-3.2-1B-Instruct`.
See this related issue: https://github.com/huggingface/text-generation-inference/issues/1781
(TL;DR: do you have a recent version of TGI? Have you tried updating? The multiple stop tokens are part of the design of Llama 3.1.)
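Since the list-valued `eos_token_id` is by design, a serving stack that needs a single pad id has to normalize it. A possible guard (an assumption sketched here, not TGI's actual fix; `first_eos_id` is a hypothetical helper):

```python
def first_eos_id(eos_token_id):
    """Return a single int id whether the config stores an int or a list."""
    if isinstance(eos_token_id, (list, tuple)):
        return eos_token_id[0] if eos_token_id else None
    return eos_token_id


# Works for both shapes of config.eos_token_id:
print(first_eos_id(128001))                    # 128001
print(first_eos_id([128001, 128008, 128009]))  # 128001
```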
Thank you for your response.
It turned out that I was running TGI on CPU 🤦🏻. In that case, TGI goes through the code path above and raises the error (is that a bug in TGI, then?), but when running on GPU it loads another variant of the model, i.e. the flash one, whose code handles this: