What setting must be used? Model fails to load in oobabooga.

#5
by YuriGagarine - opened

The Oobabooga webui downloads the model fine, but it then fails to load with the error below. What settings must be used? (I've tried wbits: 4, groupsize: 128, llama.)

Traceback (most recent call last):
  File "D:\oobabooga_windows\text-generation-webui\server.py", line 59, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\oobabooga_windows\text-generation-webui\modules\models.py", line 159, in load_model
    model = load_quantized(model_name)
  File "D:\oobabooga_windows\text-generation-webui\modules\GPTQ_loader.py", line 170, in load_quantized
    exit()
  File "D:\oobabooga_windows\installer_files\env\lib\_sitebuiltins.py", line 26, in __call__
    raise SystemExit(code)
SystemExit: None

I have the same problem...

I got it working with wbits and groupsize blank.
Model type = llama
Load in 8-bit checked.
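
For reference, doing the same thing directly with the transformers library (outside the webui) looks roughly like this. It's only a minimal sketch: the repo id is a placeholder for whatever model you downloaded, and 8-bit loading assumes the bitsandbytes and accelerate packages are installed.

# Minimal sketch: load the native (unquantized) checkpoint directly with transformers.
# "your-org/your-model" is a placeholder, not the actual repo id of this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires accelerate; spreads layers across GPU/CPU
    load_in_8bit=True,   # requires bitsandbytes; mirrors the "Load in 8-bit" checkbox
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))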

Cognitive Computations org

It's not 8 bit. It's native.

Cognitive Computations org

I'll make a blog post tomorrow

Cognitive Computations org

"GPTQ_loader.py" means you are trying to load it as quantized.
It's not quantized.

"GPTQ_loader.py" means you are trying to load it as quantized.
It's not quantized.
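
To make the error less mysterious, here is a rough, simplified illustration (not the actual webui source) of the decision that produces that traceback: a non-blank wbits value sends loading down the GPTQ path, which expects a pre-quantized checkpoint and calls exit() when it can't find one.

# Simplified illustration only -- not the real text-generation-webui code.
def load_model_simplified(model_name: str, wbits: int = 0, load_in_8bit: bool = False):
    if wbits > 0:
        # GPTQ path: only works for models published as GPTQ (pre-quantized) checkpoints.
        print(f"No quantized checkpoint found for {model_name}, exiting...")
        exit()  # raises SystemExit: None, which is exactly what the traceback shows
    # Native path: plain transformers weights, optionally converted to 8-bit at load time.
    print(f"Loading {model_name} natively (load_in_8bit={load_in_8bit})")

# With wbits=4 (as in the original question) this bails out;
# with wbits left blank it loads the model normally.
load_model_simplified("some-org/some-model", wbits=0, load_in_8bit=True)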

Thank you, and thanks to wildstar50 - much appreciated.

@ehartford do you have any advice for hosting this model using the Hugging Face "Inference Endpoints" or "Spaces"?

I'm interested in experimenting with models, but I just wasted all my cash on an AMD card, so all I have is a Hugging Face account and a credit limit.

edit: Downloaded it and attempted to load it with oobabooga, and it looks like it'll only run with CUDA support :(

thanks friend "wildstars50" it was true I put it as you indicated and it works fine.
