Can you use the same method to train the qwen2.5 32b model?
o1's reasoning is amazing.
@xldistance This just dropped! https://huggingface.co./Qwen/QwQ-32B-Preview
I use the model for GraphRAG, and QwQ-32B is nowhere near as good at global search as Marco-o1.
I didn't know about graphRAG before, it sounds awesome!
Are you referring to search query generation?
Marco-o1 is much better than qwen2.5:32b and qwq:32b at generating entities and querying entities in GraphRAG.
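For anyone wanting to try this themselves: a minimal sketch of pointing GraphRAG's indexing LLM at a locally served model through an OpenAI-compatible endpoint (the `marco-o1` model name and the Ollama-style `api_base` below are assumptions, not something xldistance confirmed):

```yaml
# settings.yaml fragment — swap the llm block to a local endpoint
llm:
  api_key: ${GRAPHRAG_API_KEY}   # dummy value is fine for a local server
  type: openai_chat
  model: marco-o1                # assumed local model tag
  api_base: http://localhost:11434/v1   # assumed Ollama OpenAI-compatible endpoint
```

Entity extraction quality then depends heavily on the model behind this endpoint, which matches the comparison above.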
Interesting, thanks for the insight
@xldistance which graphrag implementation are you using? I was also wondering if you use a specific system prompt?
RTX 4090. I've tweaked the GraphRAG prompts with gpt-o1.
Oh I see, but how did you deal with the