Question about Gemma2:2b system prompt template and usage in Langflow
Hello Hugging Face community! 👋
I'm currently experimenting with Gemma2:2b and was wondering if anyone could help clarify something. Specifically, what kind of system prompt template does Gemma2:2b use? For example, how do you assign a specific role to the model, provide instructions, and set restrictions on what it should or shouldn't do?
Additionally, has anyone tried using Gemma2:2b within Langflow?
Thanks in advance for any insights! 😊
Hi @MarcusWey, Gemma2:2b doesn't natively support "system" instructions. Instead, roles like "user" and "model" are explicitly marked with specific control tokens, and the model follows the turn structure accordingly. This still lets you give the model instructions, define its role in a conversation, and guide how it processes the input.
However, when working with the instruction-tuned ("it") version (like gemma-2-2b-it), specific formatting is required to assign roles, give instructions, and set rules for the model's behavior. In this version, control tokens such as `<start_of_turn>`, `<end_of_turn>`, `user`, and `model` are used to structure the conversation, making it better suited for dialogue and interactive tasks.
Here's sample code showing how to format a prompt for the Gemma2:2b-it model with a system-style instruction.
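A minimal sketch of the idea: since Gemma-2's chat template has no "system" role, the usual workaround is to fold your instructions into the first user turn and wrap everything in the `<start_of_turn>`/`<end_of_turn>` tokens. The helper name `build_gemma_prompt` and the example instruction text are my own; `tokenizer.apply_chat_template` on `google/gemma-2-2b-it` produces the same structure if you'd rather not build the string by hand.

```python
# Sketch of Gemma-2's turn-based prompt format (no "system" role).
# System-style instructions are prepended to the first user turn,
# which is the common workaround for Gemma models.

def build_gemma_prompt(instructions: str, user_message: str) -> str:
    """Format a single-turn Gemma-2 prompt using its control tokens."""
    # Fold the "system" instructions into the first user turn.
    first_turn = f"{instructions}\n\n{user_message}"
    return (
        "<start_of_turn>user\n"
        f"{first_turn}<end_of_turn>\n"
        "<start_of_turn>model\n"  # generation continues from here
    )

prompt = build_gemma_prompt(
    "You are a concise travel assistant. Politely refuse medical questions.",
    "Suggest three sights to see in Kyoto.",
)
print(prompt)
```

The resulting string can be passed directly to a text-generation pipeline (or to the Ollama/Langflow prompt field); the model will produce its reply after the final `<start_of_turn>model` marker and end it with `<end_of_turn>`.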
Kindly find the attached screenshots above. If you have any concerns, let me know. Thank you.