
Tokenizer chat template doesn't accept system prompt

#3
by gabrielmbmb - opened

Hi! I was trying to use the model with a system prompt, but the current chat_template in tokenizer_config.json doesn't allow it and raises an exception, because only the user and assistant roles are accepted. I think this is unexpected: in the usage example you call fastchat.conversation.get_conv_template with a system prompt, so I believe this is just a bug in the chat_template in tokenizer_config.json.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prometheus-eval/prometheus-7b-v2.0")

# Raises an exception: the current template only accepts the
# "user" and "assistant" roles, so the "system" message is rejected.
tokenizer.apply_chat_template([
    {"role": "system", "content": "..."},
    {"role": "user", "content": "..."},
])

As far as I know, since this model is based on Mistral, the fix should be as easy as copying the chat_template from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1. That said, is there a reason you recommend using fastchat.conversation.get_conv_template instead of the transformers tokenizer's chat template?
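
In the meantime, the template can be overridden locally until a fix lands in the repo. Below is a minimal sketch, assuming the common Mistral convention of folding the system prompt into the first user turn; the template string is illustrative, not the one the maintainers merged:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prometheus-eval/prometheus-7b-v2.0")

# Illustrative override: prepend the system message to the first user turn,
# since the Mistral [INST] prompt format has no dedicated system role.
tokenizer.chat_template = (
    "{{ bos_token }}"
    "{% if messages[0]['role'] == 'system' %}"
    "{% set system = messages[0]['content'] %}"
    "{% set messages = messages[1:] %}"
    "{% else %}"
    "{% set system = '' %}"
    "{% endif %}"
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "{{ '[INST] ' + (system + '\\n\\n' if loop.first and system else '') + message['content'] + ' [/INST]' }}"
    "{% elif message['role'] == 'assistant' %}"
    "{{ ' ' + message['content'] + eos_token }}"
    "{% endif %}"
    "{% endfor %}"
)

prompt = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "You are a fair judge."},
        {"role": "user", "content": "Evaluate the response."},
    ],
    tokenize=False,
)
print(prompt)

Note this only patches the tokenizer in memory; the proper fix is updating chat_template in the repo's tokenizer_config.json.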

prometheus-eval org

Thanks for pointing this out! I just merged the PR.
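
For anyone verifying the merged fix, a quick check (assuming the updated tokenizer_config.json is re-downloaded rather than served from a stale local cache):

from transformers import AutoTokenizer

# force_download avoids reusing a stale cached tokenizer_config.json.
tokenizer = AutoTokenizer.from_pretrained(
    "prometheus-eval/prometheus-7b-v2.0", force_download=True
)

# Should format without raising now that the template accepts a system role.
print(tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "..."},
        {"role": "user", "content": "..."},
    ],
    tokenize=False,
))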

scottsuk0306 changed discussion status to closed
