LM Studio vs llama.cpp: different results?
I tested QwQ-32B-Q3_K_M.gguf in the clients I mentioned. LM Studio just got updated, so I just ran the model without changing anything (temp 0.6 of course, repetition penalty 1). And it seems that the model performs better in LM Studio?
I asked the same question SEVERAL times in both LM Studio and llama.cpp (llama-cli on the command line), and in llama.cpp the model thinks too much and "goes in the wrong direction", so to speak, so the final answer is not the best, while in LM Studio it gets everything right.
Maybe someone can provide the correct settings for llama.cpp? Mine are:
llama-cli -m QwQ-32B-Q3_K_M.gguf -c 7000 -ngl 65 --multiline-input --conversation --color --temp 0.6 --no-mmap --mlock
So, basically the defaults. What is missing?
Could you try using our suggested settings in llama.cpp:
./llama.cpp/llama-cli \
--model unsloth-QwQ-32B-GGUF/QwQ-32B-Q4_K_M.gguf \
--threads 32 \
--ctx-size 16384 \
--n-gpu-layers 99 \
--seed 3407 \
--prio 2 \
--temp 0.6 \
--repeat-penalty 1.1 \
--dry-multiplier 0.5 \
--min-p 0.01 \
--top-k 40 \
--top-p 0.95 \
-no-cnv \
--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc" \
--prompt "<|im_start|>user\nCreate a Flappy Bird game in Python."
See our guide here for more info: https://docs.unsloth.ai/basics/tutorial-how-to-run-qwq-32b-effectively
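By the way, if you want to keep interactive conversation mode as in your original command, the same sampling flags should carry over. A rough sketch (untested; adjust --n-gpu-layers and the quant to your hardware):
./llama.cpp/llama-cli \
--model QwQ-32B-Q3_K_M.gguf \
--ctx-size 16384 \
--n-gpu-layers 65 \
--temp 0.6 \
--repeat-penalty 1.1 \
--dry-multiplier 0.5 \
--min-p 0.01 \
--top-k 40 \
--top-p 0.95 \
--samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc" \
--multiline-input --conversation --color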
I know, of course... and I tried those settings too. But for now, I feel like leaving everything at default in LM Studio (only setting temp 0.6 and repetition penalty 1) works perfectly...
Can you believe it: I'm using your QwQ-32B-Q3_K_M.gguf quant, and it seems to work as if it were Q5_K_M or something...
I gave it a super hard prompt and it worked! >>>
"Create Mandelbrot fractal simulator with Earth like colors and rendered on a 3D sphere. Use html, css, javascript, single file. Allow movements using mouse dragging and infinite zoom using mouse wheel."
On my super slow hardware it thought for almost an HOUR (10,000 tokens), but it gave me working code.
And this is not a single example; I tried different hard prompts, and it seems super intelligent, and this is ONLY Q3_K_M!
I noticed that this model performs better in LM Studio than in llama.cpp, so I opened the model settings in LM Studio, where you can see the Jinja template, and compared it to what I see in llama.cpp when launching the server. From what I can tell, they are slightly different, or maybe it's just typos?
LM Studio template:
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- '' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" and not message.tool_calls %}
{%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
llama.cpp template:
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- '' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" and not message.tool_calls %}
{%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n<think>\n' }}
{%- endif %}
Differences:
1. The assistant-content line is not the same: LM Studio uses {%- set content = (message.content.split('</think>')|last).lstrip('\n') %}, while llama.cpp uses {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}.
2. The generation prompt differs: llama.cpp emits {{- '<|im_start|>assistant\n<think>\n' }} instead of LM Studio's {{- '<|im_start|>assistant\n' }}.
So what is going on?
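One way to check whether those two differences actually change the rendered prompt is to render both templates with the jinja2 Python package and diff the output. A minimal sketch, assuming the two templates above are saved as lmstudio.jinja and llamacpp.jinja:

import difflib
from jinja2 import Environment

env = Environment()

# Sample conversation with one <think> block in the assistant reply.
messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "<think>\nsome reasoning\n</think>\nHello!"},
    {"role": "user", "content": "How are you?"},
]

rendered = []
for path in ("lmstudio.jinja", "llamacpp.jinja"):
    with open(path) as f:
        template = env.from_string(f.read())
    rendered.append(template.render(messages=messages, tools=None,
                                    add_generation_prompt=True))

# In Python-based Jinja2, split('</think>')|last and split('</think>')[-1]
# return the same string, so the only diff should be the trailing <think>
# line that the llama.cpp template adds to the generation prompt.
for line in difflib.unified_diff(rendered[0].splitlines(),
                                 rendered[1].splitlines(), lineterm=""):
    print(line)

If that is the only diff, the split() change is cosmetic, and the real difference is that the llama.cpp template pre-opens the <think> block at the start of the assistant turn.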