Benchmarks
Can you add some benchmarks such as MATH-500 and AIME?
Working on it! :)
Amazing! I can't wait to see the results.
@PSM272 Update: MATH 500 eval is done!
qingy2024/QwQ-14B-v0.2-MATH500-Eval
So my version of QwQ does score a little better :D
I was looking at the incorrect answers, and for most of them the LLM went into an “infinite loop” without providing an answer (or it hit the max output length)… Maybe the temperature was too high...
Additionally, can you share your MATH-500 eval code?
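For illustration (not the actual eval script asked for above), here is a rough sketch of how a MATH-500 pass could be run with Hugging Face `transformers`. The model id, dataset fields, and the naive answer check are assumptions; it also caps `max_new_tokens` and uses a lower temperature, which should reduce the runaway generations mentioned above:

```python
# Hypothetical MATH-500 eval sketch, NOT the script used for the results above.
# Model id, dataset fields, and the answer check are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "qingy2024/QwQ-14B-v0.2"  # assumed model id, for illustration only
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# HuggingFaceH4/MATH-500 exposes "problem" and "answer" columns.
ds = load_dataset("HuggingFaceH4/MATH-500", split="test")

correct = 0
for row in ds:
    messages = [{
        "role": "user",
        "content": row["problem"] + "\nPut your final answer in \\boxed{}.",
    }]
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(
        inputs,
        max_new_tokens=4096,  # hard cap so runaway generations still terminate
        do_sample=True,
        temperature=0.2,      # lower temperature to reduce repetition loops
        top_p=0.9,
    )
    text = tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
    # Naive exact match on the last \boxed{...}; a real grader should normalize.
    pred = text.split("\\boxed{")[-1].split("}")[0] if "\\boxed{" in text else ""
    correct += int(pred.strip() == str(row["answer"]).strip())

print(f"accuracy: {correct / len(ds):.3f}")
```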
You should try a version with microsoft/phi-4...
no base model though... :/
Oh, is the UwU-14B using the base model or the instruct model?
I always use the base model for fine-tuning because it is much easier to adapt to new use cases than the instruct model, which has already learned a specific way to reply to the user. So while you can fine-tune the instruct version, it will not perform as well.
P.S. UwU-14B is fine-tuned from the base model Qwen2.5-14B.
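Since this came up, here is a rough sketch of what "start from the base model" can look like in practice, assuming a LoRA setup with `peft`; the rank/alpha/target-module choices are placeholders and the training loop itself is omitted:

```python
# Illustrative sketch only: starting a LoRA fine-tune from the *base* model
# (Qwen/Qwen2.5-14B) rather than the instruct variant (Qwen/Qwen2.5-14B-Instruct).
# LoRA hyperparameters below are placeholder choices, not the settings used for UwU-14B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "Qwen/Qwen2.5-14B"  # base model: no instruction-following style baked in yet
tok = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, a normal SFT loop (e.g. trl's SFTTrainer) would train on the
# reasoning dataset; the base model is free to learn the new response format
# instead of fighting the instruct model's existing style.
```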
Ah, that may have been what I was doing wrong on my Qwen-14B version. I did fine-tune Phi-4 on my dataset, and the MATH score went from 80.5% to 84.8%.
Oh that's interesting!