Request for GGUF support through llama.cpp

#1
by Doctor-Chad-PhD - opened

Dear Tencent Team,

I would like to request support for GGUF quantization through the llama.cpp library, as this would allow more users to run your new model.
The repo for llama.cpp can be found here: https://github.com/ggerganov/llama.cpp.
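For reference, the usual llama.cpp workflow once a model architecture is supported is to convert the Hugging Face checkpoint to GGUF and then quantize it. A minimal sketch is below; the paths, output names, and quantization type are illustrative assumptions, and the `convert_hf_to_gguf.py` script and `llama-quantize` tool come from the llama.cpp repository.

```python
# Sketch of the typical GGUF conversion + quantization workflow,
# assuming llama.cpp gains support for this model architecture.
import subprocess

MODEL_DIR = "path/to/hf-model"        # hypothetical local Hugging Face checkout
F16_GGUF = "model-f16.gguf"           # unquantized GGUF output (illustrative name)
QUANT_GGUF = "model-Q4_K_M.gguf"      # quantized GGUF output (illustrative name)

# 1. Convert the Hugging Face checkpoint to an unquantized GGUF file
#    using llama.cpp's conversion script.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the GGUF file with the llama-quantize tool built from llama.cpp.
#    Q4_K_M is just one common choice of quantization type.
subprocess.run(
    ["./llama-quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```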
Thank you for considering this request.

Have you created an issue on the GitHub repo? If you do, there is a better chance of it being implemented.
