Brilliant

by lewismac - opened

This model has passed some tests with RAG that both the default 2.5coder and qwq failed on their own - THANK YOU

The response to coding should be a bit better, and the response to RAG might be a bit worse.

Interestingly, for both 2.5-coder and QwQ on their own, I was getting a poor response with RAG using AnythingLLM for a specific prompt. This 9010 model returns the correct result πŸ€·β€β™‚οΈ!

Thanks for your test and positive feedback.

I will test the 9:1, 8:2, and 7:3 ratios separately to see how much impact each has on the model.

Could you tell us what strengths and weaknesses we can see in each of these merges? And which one is personally your favorite?

9:1
