Feedback
Disappointing. Very disappointing.
I see that you've made a new model, but you still don't know what made the previous one great.
Here are 6 questions for you to ponder:
- **Who are you competing against?**
  You are currently competing against LLAMA-405B, Mistral-Large and DeepSeek, and indirectly against GPT-4.
- **What kind of words do people hate in LLMs?**
  GPTisms. People hate the way ChatGPT speaks; it sticks out like a big neon sign saying "look, this text was written by AI". A bit of an uncanny-valley feeling.
- **What did people like about your models?**
  It was not the smarts or the benchmarks. It was the writing style. You were the local Claude. You had the least GPTslop among the local models.
- **What have you tuned it on that could have caused a disappointment?**
  See question 2.
- **Why did DBRX (a model released at almost the same time as CR, with similar benchmarks) fail?**
  It had a boring official tune. It was just another GPT-tuned assistant, nothing special. The base model was knowledgeable, but nobody really cared.
- **What does it all mean?**
  LLAMA-405B is the best assistant, DeepSeek has coding, Mistral-Large has smarts and NSFW. All of them have something to compensate for a sloppy writing style. You had a good writing style to compensate for stupidity; now you don't. You've fucked yourself over by eating the GPT poison pill. You have become just another unremarkable assistant tune, like DBRX. Nobody has a reason anymore to use it over other models.
I'm just an unenlightened bystander, I'm sure you know better and all, but here is my advice:
Stop competing with GPT-4 and all those assistant tunes; we've got more than enough of those. The market is oversaturated. Just give up. Nobody needs another GPTslop assistant tune, a dumb one in particular. If you want to be an assistant so badly, at least don't tune on GPTslop. You know what is lacking? Writer tunes. In the proprietary segment there's only Claude, and locally... there is nobody, now that you have decided to leave. **Please stop tuning on GPTslop. Please compete against Claude. Please return.**

CR+ 2024-04 was a breath of fresh air. This one was a letdown. I loaded up my old quant to make sure I wasn't just being nostalgic. CR+ 2024-04 was better. It's a shame.
Here are more examples of GPTslop; as you can see, people hate them.
- https://www.reddit.com/r/ChatGPTPro/comments/163ndbh/overused_chatgpt_terms_add_to_my_list/
- https://www.reddit.com/r/ChatGPT/comments/16uloe2/i_tried_adding_a_ban_list_of_overused_words_and/
- https://www.twixify.com/post/most-overused-words-by-chatgpt
- https://www.reddit.com/r/SillyTavernAI/comments/1e6roaw/can_we_get_a_full_list_of_all_the_gptisms/
- https://www.reddit.com/r/LocalLLaMA/comments/18k6nft/which_gptism_in_local_models_annoys_you_the_most/
You can provide your feedback directly in the Cohere Discord community.
I was disappointed with the API as well. The exact same prompt started injecting lectures into the dialogue. I tried both versions to make sure.
The 08 CR+ is downloading tonight. Hope it's not a waste. Maybe omitting those top GPT-assistant tokens will save it. My experience has been that the local model is much better than the API. I keep hearing reviews like the OP's, though.
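In case anyone wants to try that token-omitting idea locally, here is a rough sketch using transformers' `bad_words_ids` option. The banned phrase list, the message, and loading the full R+ repo are placeholder assumptions for illustration, and this only blocks those exact token sequences, not every casing or spacing variant.

```python
# Rough sketch: ban a few "GPTslop" phrases at generation time with
# transformers' bad_words_ids. The phrase list and settings are placeholders;
# any transformers causal LM works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CohereForAI/c4ai-command-r-plus-08-2024"  # heavy model; swap in whatever you run locally
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

banned_phrases = ["tapestry", "delve into", "testament to", "shivers down"]
# Encode each phrase without special tokens so only its content tokens are banned.
bad_words_ids = [tokenizer(p, add_special_tokens=False).input_ids for p in banned_phrases]

messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=300, bad_words_ids=bad_words_ids)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

A sampler-level ban list in your frontend (SillyTavern's banned tokens, llama.cpp logit bias, etc.) gets you roughly the same effect if you're not running through transformers.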
What was gomez saying about things not plateauing?
Hey guys, I just checked the 35B one and it really looks good, but this one (Plus) is not that great.
chatting about it here won't help, this isn't their customer support lol...
if the hf space demo isn't working for you, just use their API or web app playground?
https://dashboard.cohere.com/welcome/login
(1000 messages free and after that it's priced competitively)
Hello! I have been using the CohereForAI/c4ai-command-r-plus-08-2024 model for a long time. My prompts contain the rules for a role-playing game, and the model usually recognized them without any problems. They are written in a plain format:
Rule 1 - description.
Rule 2 - description. And so on.
About a week ago I opened the chat and found that the model refused to understand my prompts.
Instead of a normal answer, it replies with something like:
```json
[ { "tool_name": "directly-answer", "parameters": {} } ]
```
I asked the model why this was happening, and the chat replied that it did not understand my prompts and was trying to accept them in JSON format. Unfortunately, I don't know how to write in this format. What should I do? And why is this happening?
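Not a real fix, but that "directly-answer" output is the model emitting a tool call, which suggests the frontend has started wrapping your prompt in its tool-use template rather than the model forgetting your rules. One way to check is to send the same rules straight to Cohere's chat API as a preamble, where no tools are involved. A rough sketch follows; the rules and message are placeholders, and the field names reflect my reading of the v1 chat endpoint.

```python
# Rough sketch, assuming Cohere's v1 chat endpoint: put the role-play rules in
# the "preamble" and see whether the reply comes back as plain text instead of
# the "directly-answer" tool-call JSON. Rules and message below are placeholders.
import os
import requests

API_KEY = os.environ["COHERE_API_KEY"]

rules_preamble = (
    "You are the narrator of a role-playing game.\n"
    "Rule 1 - stay in character at all times.\n"
    "Rule 2 - describe the scene before any dialogue.\n"
    # ...the rest of your rules, in the same plain-text format as before
)

resp = requests.post(
    "https://api.cohere.com/v1/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "command-r-plus-08-2024",
        "preamble": rules_preamble,
        "message": "The party enters the tavern. What happens next?",
        # No "tools" field here, so there is no tool-call format for the model to fall into.
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["text"])
```

If that returns normal prose, the problem is in how HuggingChat is calling the model, not in your prompt format.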
It's pretty clear that the 08-2024 model is nearing the end of its life in HuggingChat.
The nail in the coffin is that the Command R7B 12-2024 model is already in the Cohere space. All these errors are what happens when they're getting close to retiring a model. It happened with the old Command R model before it was removed, and now it's happening with 08-2024. 12-2024 is coming soon.
I was hoping for it to be at least 01-2025, since it's nearing the end of December and almost the New Year. Missed opportunity, lol.
It better not be full of GPTslop. Cohere, please don't disappoint again.
I just tried to use the new R7B 12-2024 model in the Cohere space, but it won't work right now.
It keeps giving me this error message: "{"message":"invalid request: tool_choice 'required' can only be specified if 'tools' are specified"}"
I see no tools in the Cohere space other than the small web search above the prompt box, and even that tool doesn't work with it.
Already wanting a model for 2025 now.
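For anyone hitting that same error when calling the API directly (the space's own request wiring isn't something we can change): it just means the request asks for tool_choice 'required' while sending no tools, so either drop tool_choice or actually pass a tools list. A rough sketch of a plain request that avoids it; the field names follow my reading of the v2 chat endpoint, and the model name is assumed from the repo id.

```python
# Rough sketch of what that error implies when you call the API yourself.
# Field names follow my reading of the v2 chat endpoint; check the official
# docs before relying on this.
import os
import requests

API_KEY = os.environ["COHERE_API_KEY"]

payload = {
    "model": "command-r7b-12-2024",  # assumed API name, mirroring the repo id
    "messages": [{"role": "user", "content": "Hello, quick smoke test."}],
    # Do NOT send "tool_choice": "required" on its own: without a "tools" array
    # the API rejects the request with exactly the error quoted above. If you
    # want forced tool use, pass a "tools" list as well (schema per Cohere's docs).
}

resp = requests.post(
    "https://api.cohere.com/v2/chat",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```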
@TheAGames10 we're looking into it!
@iNeverLearnedHowToRead if you come across errors, can you share the conversation with me? I'd love to dig deeper into this but can't reproduce the issue :(