Is this good?

#7 opened by Noxi-V

I know this might be a weird place to ask, but I've seen the downloads sitting at around 40k between this and bart's GGUF ones combined, so I tried it before, and it was decent...
Until it wasn't: it kept acting as me no matter what, and I couldn't do anything about it forgetting things. Yes, I used the recommended settings, and I usually play at around a 30k context size.
Is it incompatible with that much context? (I assumed it could handle 30k since it's a Nemo mix, after all, and LLM Explorer says this model supports up to something like 128k context.)
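
As an aside, a quick way to see what context length the model itself advertises is to read it from the repo's config. This is just an illustrative sketch with transformers, assuming the repo is the one linked further down:

```python
# Illustrative sketch (not from this thread): print the declared maximum
# context length straight from the model's config.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("MarinaraSpaghetti/NemoMix-Unleashed-12B")
print(cfg.max_position_embeddings)  # the context length the config advertises
```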

If you want help, please share some details first:
- What quant are you using? (q4 or higher recommended)
- Are you caching context in any way? (don't)
- What backend are you running the model on? (Oobabooga's WebUI recommended)
- Are you sure the instruct format is correct? (it might have imported incorrectly due to ST's recently changed format)
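
To make those settings concrete, here is a rough sketch of loading a q4 quant at roughly the 30k context from the question, using llama-cpp-python rather than the WebUI mentioned above; the GGUF file name is a hypothetical placeholder, not an actual file from this repo:

```python
# Minimal sketch, assuming a local q4 GGUF of the model and llama-cpp-python installed.
from llama_cpp import Llama

llm = Llama(
    model_path="NemoMix-Unleashed-12B-Q4_K_M.gguf",  # hypothetical q4+ quant file
    n_ctx=30720,      # roughly the 30k context the question mentions
    n_gpu_layers=-1,  # offload everything to the GPU if there is room
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello there!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```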

You can check opinions on how the model works here: https://huggingface.co./MarinaraSpaghetti/NemoMix-Unleashed-12B/discussions/1

I used it with 64k context on the q8_0 GGUF, and it was working fine; sometimes forgetful, but that's the magic of LLMs and perplexity. It never played as me, though.
