Error: 'System role not supported' when using gemma-2-2b with chat templates

#28 opened by sbhctashi

Hi everyone,

I'm trying to use the gemma-2-2b model to apply a chat template, but I'm encountering the following error:

TemplateError: System role not supported

Here's a snippet of my code:

# `tokenizer` is the Gemma tokenizer loaded earlier with AutoTokenizer.from_pretrained(...)
user_prompt = {"role": "user", "content": prompt}
system_prompt = {"role": "system", "content": "Content of the system message"}

# This call raises: TemplateError: System role not supported
chat_prompt = tokenizer.apply_chat_template(
    [
        system_prompt,
        user_prompt
    ]
)

When I call the apply_chat_template method, I get the error stating that the 'system' role is not supported.

My questions are:

1. Does the gemma-2-2b model support the 'system' role in chat templates?
2. If not, what's the recommended way to structure prompts for this model without using the 'system' role?
3. Has anyone else experienced this issue, and are there any workarounds?

Google org

Hi @sbhctashi ,

I was able to reproduce the issue, and it seems you're using the gemma-2-2b-it model instead of the gemma-2-2b model. The gemma-2-2b-it chat template does not support the system role; however, if you use assistant instead of system, it works correctly. Please refer to the provided gist for more details, and check the reference mentioned there for additional information.

Thank you.
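For anyone hitting the same error, a different, commonly used workaround (not necessarily what the gist above does) is to merge the system text into the first user turn, since the stock gemma-2-2b-it template only accepts alternating user/assistant messages. A minimal sketch, assuming the instruction-tuned google/gemma-2-2b-it checkpoint and placeholder strings:

from transformers import AutoTokenizer

# Assumption: the instruction-tuned checkpoint; adjust the model id if needed.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")

system_text = "Content of the system message"
user_text = "Why is the sky blue?"

# Workaround: prepend the system instructions to the first user message
# instead of passing a separate {"role": "system", ...} entry.
messages = [
    {"role": "user", "content": system_text + "\n\n" + user_text},
]

chat_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(chat_prompt)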

@sbhctashi You can try the following two templates. I borrowed them from SimPO's repo and tanliboy's gemma-2-9b Hugging Face page. Both work fine on the 9B model.

"{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] | trim + '\n\n' %}{% set messages = messages[1:] %}{% else %}{% set system_message = '' %}{% endif %}{% for message in messages %}{% if loop.index0 == 0 %}{% set content = system_message + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '' + role + '\n' + content | trim + '\n' }}{% endfor %}{% if add_generation_prompt %}{{'model\n'}}{% endif %}"

"{{ bos_token }}{% for message in messages %}{{ '' + message['role'] + '\n' + message['content'] | trim + '\n' }}{% endfor %}{% if add_generation_prompt %}{{'model\n'}}{% endif %}",

@sbhctashi Have you successfully SFT’d and aligned the Gemma-2-2b model? I find SFT relatively straightforward with this 2B model, but I'm struggling to align it using DPO. I've tried a wide range of learning rates and beta values, but no luck so far: the win rate is around 0.05, whereas it should normally be around 0.2 to 0.3. Do you have any suggestions?
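For reference, a rough sketch of the kind of DPO hyperparameter sweep being described, assuming a recent trl release (DPOConfig/DPOTrainer), a hypothetical SFT'd checkpoint path, and toy preference data; the specific learning rate and beta values are illustrative, not recommendations:

from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "path/to/gemma-2-2b-sft"  # hypothetical SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy preference data in the standard prompt/chosen/rejected format.
train_dataset = Dataset.from_dict({
    "prompt": ["Why is the sky blue?"],
    "chosen": ["Because of Rayleigh scattering."],
    "rejected": ["Because it reflects the ocean."],
})

# Example values from the kind of sweep mentioned above, not recommendations.
args = DPOConfig(
    output_dir="gemma2-2b-dpo",
    learning_rate=5e-7,   # swept over several orders of magnitude
    beta=0.1,             # swept as well
    num_train_epochs=1,
    per_device_train_batch_size=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()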
