`config.eos_token_id` being a list causes problems

#149
opened by sadra-barikbin

Hi there!

The model config's eos_token_id is of type list, but it is supposed to be an int according to transformers' configuration_utils.py::PretrainedConfig.

https://huggingface.co./meta-llama/Llama-3.1-8B-Instruct/blob/0e9e39f249a16976918f6564b8830bc894c89659/config.json#L8-L12
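For reference, a quick way to see the list-valued field (a small sketch; it assumes access to the gated repo):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
print(type(config.eos_token_id))  # <class 'list'>, not int
print(config.eos_token_id)        # [128001, 128008, 128009]
```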

This causes a problem when serving the model with TGI:

File "/opt/conda/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3347, in batch_encode_plus
    padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
  File "/opt/conda/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2976, in _get_padding_truncation_strategies
    if padding_strategy != PaddingStrategy.DO_NOT_PAD and (self.pad_token is None or self.pad_token_id < 0):
TypeError: '<' not supported between instances of 'NoneType' and 'int'
2024-10-10T15:39:46.502649Z ERROR warmup{max_input_length=4095 max_prefill_tokens=4145 max_total_tokens=4096 max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:54: Server error: '<' not supported between instances of 'NoneType' and 'int'
Error: Backend(Warmup(Generation("'<' not supported between instances of 'NoneType' and 'int'")))
2024-10-10T15:39:46.557801Z ERROR text_generation_launcher: Webserver Crashed
2024-10-10T15:39:46.557834Z  INFO text_generation_launcher: Shutting down shards

What triggers the problem in TGI is that, if the model's tokenizer does not have a padding token, TGI attempts to set it from the config's eos_token_id:

https://github.com/huggingface/text-generation-inference/blob/0c478846c5002a4053b0349d6557bafb9cedc935/server/text_generation_server/models/causal_lm.py#L547-L551

By setting tokenizer.pad_token_id to that list, the tokenizer's _pad_token becomes ['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']:

https://github.com/huggingface/transformers/blob/617b21273a349bd3a94e2b3bfb83f8089f45749b/src/transformers/tokenization_utils_base.py#L1295-L1297
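Here is a minimal sketch of that code path in plain transformers (not TGI's exact code; behavior as of the transformers revision linked in this thread):

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Equivalent of the TGI lines linked above: if there is no pad token,
# reuse the config's eos_token_id -- which here is a list, not an int.
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = config.eos_token_id

# The setter converts each id to its token, so _pad_token ends up as a list:
print(tokenizer._pad_token)  # ['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']
```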

When tokenizer(..., padding=True) is called, tokenizer.pad_token and tokenizer.pad_token_id are accessed. As per the code below, tokenizer.pad_token becomes the string "['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']" and tokenizer.pad_token_id becomes None, which causes the error above.

https://github.com/huggingface/transformers/blob/617b21273a349bd3a94e2b3bfb83f8089f45749b/src/transformers/tokenization_utils_base.py#L1101-L1110

https://github.com/huggingface/transformers/blob/617b21273a349bd3a94e2b3bfb83f8089f45749b/src/transformers/tokenization_utils_base.py#L1233-L1240
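Continuing the sketch above, the getters then return the stringified list and None, and requesting padding reproduces the warmup error:

```python
print(tokenizer.pad_token)     # "['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']"
print(tokenizer.pad_token_id)  # None (the stringified list is not in the vocab)

# The padding-strategy check compares pad_token_id with 0 and fails:
tokenizer(["Hello", "Hi there!"], padding=True)
# TypeError: '<' not supported between instances of 'NoneType' and 'int'
```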

This problem also exists for meta-llama/Llama-3.2-3B-Instruct and meta-llama/Llama-3.2-1B-Instruct.

@joaogante

See this related issue: https://github.com/huggingface/text-generation-inference/issues/1781

(TL;DR: do you have a recent version of TGI? Have you tried updating? The multiple stop tokens are part of the design of Llama 3.1.)

Thank you for your response.

It turned out that I was running TGI on CPU 🤦🏻. In that case, TGI goes through the code path above and raises the error (is that a bug in TGI, then?), but when running on GPU it loads another variant of the model, i.e. the flash one, whose code handles this case:

https://github.com/huggingface/text-generation-inference/blob/3ea82d008c8f26157e8d0b568b885536efb6a7b0/server/text_generation_server/models/flash_causal_lm.py#L970-L972
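For completeness, the general idea of that handling, as a hedged sketch (not the exact TGI code linked above): collapse a list-valued eos_token_id to a single id before reusing it as the pad token.

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Sketch only: pick one id from a list-valued eos_token_id so that
# pad_token_id stays a single int and padding keeps working.
eos = config.eos_token_id
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = eos[0] if isinstance(eos, (list, tuple)) else eos

print(tokenizer.pad_token_id)  # a single int
```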
