Why is the original GGUF so large?

#11
by nopainkiller - opened

Isn't it supposed to be much smaller than the model itself?

16-bit vs. 32-bit?

How can the quantized GGUF version (34 GB) weigh double the non-quantized version (17 GB)? Isn't this a mistake?

I did a quick Q4_K_M of Gemma-2B myself: https://huggingface.co./nopainkiller/Gemma-2B-GGUF/tree/main. Somehow it is not working with llama.cpp; it fails with the error "llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found" (I ran with the latest pull).

GGUF is not quantized, it's a format. You can have a GGUF in 16-bit or 32-bit, which is the full size of the model. Why is this one larger than the original model if it's the original, non-quantized weights? I guess the safetensors are distributed in 16-bit while the GGUF is full precision in 32-bit? (Or something is not right, but you can convert the model in 16-bit, in all its glory, to GGUF format without quantizing it.) In fact, there are two different scripts: one converts, and one quantizes, if anyone wishes.
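A quick back-of-the-envelope check in Python makes the numbers line up; the roughly 8.5 billion parameter count for Gemma-7B is an assumption based on its published size:

params = 8.5e9  # assumed parameter count for Gemma-7B
print(f"float32: {params * 4 / 1e9:.0f} GB")           # ~34 GB, matching the large GGUF
print(f"float16/bfloat16: {params * 2 / 1e9:.0f} GB")  # ~17 GB, matching the safetensors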

It should work; see https://github.com/ggerganov/llama.cpp/commit/580111d42b3b6ad0a390bfb267d6e3077506eb31

Yes, I just built from the latest and still got the error I mentioned. Any ideas?

It fails for me too, and I had pulled and built the latest llama.cpp: https://github.com/ggerganov/llama.cpp/issues/5635

GGUF is the parameter format used by llama.cpp. You can generate it with llama.cpp's convert.py from the Hugging Face safetensors files.

But still, why is it larger than the original parameters (the safetensors size)?

@MaziyarPanahi @nopainkiller I doubt whether the GGUF works in llama.cpp. @ggerganov made a patch to the params writer a few hours ago, so you may want to try the latest converter.

The reason for the size is that it is indeed stored in float32; this was confirmed in this discussion by a Google staff member.

As to why it's bigger than the original model: the original model is stored in bfloat16 rather than plain float16. Bfloat16 is a special format that is the same size as float16 but has the exponent range of float32. GGUF does not support bfloat16, so you either lose accuracy by converting it to float16, or retain accuracy but sacrifice the space savings by converting it to float32, which is what Google did.
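To make the precision trade-off concrete, here is a minimal sketch using PyTorch (torch and the sample values are only for illustration): bfloat16 and float16 both use 2 bytes per value while float32 uses 4, but bfloat16 shares float32's exponent range, so large magnitudes that fit in bfloat16 overflow when cast down to float16:

import torch

x = torch.tensor([1e30, 3.14159], dtype=torch.bfloat16)
print(x.element_size())                    # 2 bytes per value
print(x.to(torch.float16))                 # 2 bytes per value, but 1e30 overflows to inf
print(x.to(torch.float32).element_size())  # exact conversion, but 4 bytes per value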

Hi @Mikael110, converting bfloat16 to fp16 won't lose too much accuracy. I think the problem is that the llama.cpp converter doesn't check for bfloat16 and just converts the model to float32 if it is not float16.
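If anyone wants to verify what dtype the original checkpoint is stored in before converting, here is a minimal sketch using only the standard library (the shard filename is just an example); the safetensors format is an 8-byte little-endian header length followed by a JSON header listing each tensor's dtype:

import json
import struct

path = "gemma-7b/model-00001-of-00004.safetensors"  # example shard name
with open(path, "rb") as f:
    header_len = struct.unpack("<Q", f.read(8))[0]
    header = json.loads(f.read(header_len))
dtypes = {v["dtype"] for k, v in header.items() if k != "__metadata__"}
print(dtypes)  # expected to show {'BF16'} for the original Gemma weights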

I tried the official GGUF model by Google in llama.cpp (with the latest changes they made to support it) and it works. However, due to this issue, none of the quants have good quality: https://github.com/ggerganov/llama.cpp/issues/5635

So I am waiting for this PR to be merged before re-running all the quants: https://github.com/ggerganov/llama.cpp/pull/5650

Weirdly enough, the LM Studio folks seem to have the instruct Q4_K_M and Q8 quants working.

Has anybody got https://huggingface.co./google/gemma-7b/blob/main/gemma-7b.gguf (32 GB) working on Windows with a Core i7, 16 GB RAM, and an 8 GB integrated UHD graphics card, using llama-cpp-python? I am using llama-cpp-python 0.2.56 (the latest as of 19 March 2024), and just the code below hangs on the Google Gemma 7B GGUF (the 32 GB file), with CPU usage stuck at 80 to 90 percent and no response.
from llama_cpp import Llama
...
modpathGemma = "llm_models/gemma-7b.gguf"
llmGemma = Llama(model_path=modpathGemma, use_mmap="true", n_gpu_layers=-1, max_tokens=2048, max_new_tokens=1024, context_length=2048)

All it does is spew out the Gemma metadata in the console/terminal.

You're asking for too many tokens and too long a context; try reducing them first:

max_tokens=512, max_new_tokens=256, context_length=512
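For what it's worth, here is a minimal sketch of the same setup using llama-cpp-python's n_ctx and n_gpu_layers constructor options, with the generation length capped via max_tokens at call time rather than in the constructor; the prompt and the choice to keep everything on the CPU are placeholders for illustration:

from llama_cpp import Llama

llm = Llama(
    model_path="llm_models/gemma-7b.gguf",  # path from the question above
    n_ctx=512,        # smaller context window to reduce memory pressure
    n_gpu_layers=0,   # keep all layers on the CPU
    use_mmap=True,    # boolean rather than the string "true"
)
out = llm("Why is the sky blue?", max_tokens=256)
print(out["choices"][0]["text"])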
