• The model was SFT-trained on a dataset of ~6k examples consisting of DeepSeek-R1 and some Gemini thinking-model outputs (3k of them sampled from Dolphin-R1).

  • For better results, use a temperature of 0.1 - 0.2, repetition penalty of 1.05 - 1.1, and min_p of 0.02 - 0.05; feel free to explore other parameters.

  • The model uses the new Mistral chat template (shown below). If you run into issues, remove time and date references.
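The recommended sampler settings above can be passed as a request payload to an OpenAI-compatible endpoint such as llama.cpp's llama-server; the model name and message contents below are illustrative placeholders, not part of the model card.

```python
# Suggested sampler settings from the notes above, expressed as a request
# payload for an OpenAI-compatible server (e.g. llama.cpp's llama-server,
# which accepts "repeat_penalty" and "min_p" as extension parameters).
payload = {
    "model": "Mistral-Thinker-0.1",  # placeholder name
    "temperature": 0.15,             # recommended range: 0.1 - 0.2
    "repeat_penalty": 1.08,          # recommended range: 1.05 - 1.1
    "min_p": 0.03,                   # recommended range: 0.02 - 0.05
    "messages": [
        {"role": "system",
         "content": "You are a thinking assistant. Help the user and avoid overthinking."},
        {"role": "user", "content": "Hi"},
    ],
}
```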

{%- set default_system_message = "You are a thinking assistant. Help the user and avoid overthinking." %}

{{- '<s>' }}

{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content'] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set system_message = default_system_message %}
    {%- set loop_messages = messages %}
{%- endif %}
{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}

{%- for message in loop_messages %}
    {%- if message['role'] == 'user' %}
        {{- '[INST]' + message['content'] + '[/INST]' }}
    {%- elif message['role'] == 'system' %}
        {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}
    {%- elif message['role'] == 'assistant' %}
        {{- message['content'] + '</s>' }}
    {%- else %}
        {{- '' }}
    {%- endif %}
{%- endfor %}
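For reference, the template above can be exercised directly with the jinja2 package (the same engine transformers uses for chat templates); the conversation below is an illustrative example, not from the model card.

```python
from jinja2 import Template

# The chat template from above, verbatim.
CHAT_TEMPLATE = """{%- set default_system_message = "You are a thinking assistant. Help the user and avoid overthinking." %}
{{- '<s>' }}
{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content'] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set system_message = default_system_message %}
    {%- set loop_messages = messages %}
{%- endif %}
{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}
{%- for message in loop_messages %}
    {%- if message['role'] == 'user' %}
        {{- '[INST]' + message['content'] + '[/INST]' }}
    {%- elif message['role'] == 'system' %}
        {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}
    {%- elif message['role'] == 'assistant' %}
        {{- message['content'] + '</s>' }}
    {%- else %}
        {{- '' }}
    {%- endif %}
{%- endfor %}"""

# No system message, so the default one is used.
messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "How are you?"},
]
prompt = Template(CHAT_TEMPLATE).render(messages=messages)
print(prompt)
```

Rendering produces a single flat string: `<s>`, the (default) system prompt wrapped in `[SYSTEM_PROMPT]…[/SYSTEM_PROMPT]`, each user turn in `[INST]…[/INST]`, and each assistant turn terminated by `</s>`.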
Model details: GGUF format, 23.6B params, llama architecture, 5-bit quantization.

