GGUF Version
First of all, thanks a lot for your mixture-of-experts model!
Just out of curiosity, which models did you merge?
@TheBloke, could you please provide us with a GGUF quant version? =)
Thank you all for the awesome work you do for the community!
@TheBloke this would be great!
Yeah, waiting as well.
The leaderboard shows some impressive results...
Strange
@TheBloke is not doing GGUF for this model.
EDIT: I mean it doesn't seem so, as it's the 16th and he must have received the notifications about this post.
TheBloke is not your slave. Is it too hard to call a Python script to do it yourself?
> TheBloke is not your slave. Is it too hard to call a Python script to do it yourself?
I said "please consider". Maybe you need to learn to read better.
> TheBloke is not your slave. Is it too hard to call a Python script to do it yourself?
Among the tons of models he quantizes, what's wrong with doing one that's on the leaderboard?
Also, isn't that the reason people know him?
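For anyone wondering, the "Python script" in question is llama.cpp's HF-to-GGUF converter. A minimal sketch of the usual flow (script and binary names vary between llama.cpp versions, so check the ones in your checkout; older trees use `convert.py` and `./quantize`):

```shell
# Sketch of the usual llama.cpp quantization flow, assuming a llama.cpp checkout.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# 1. Convert the Hugging Face model directory to an FP16 GGUF file
python convert_hf_to_gguf.py /path/to/FusionNet_7Bx2_MoE_14B \
    --outfile fusionnet-f16.gguf

# 2. Quantize the FP16 GGUF, e.g. to Q4_K_M
./llama-quantize fusionnet-f16.gguf fusionnet-Q4_K_M.gguf Q4_K_M
```

The first step needs enough disk for the full FP16 weights; the quantize step is comparatively cheap.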
I have generated the GGUF quantized version of the model.
The files can be found at https://huggingface.co./Nan-Do/FusionNet_7Bx2_MoE_14B-GGUF
Thx @Nan-Do, I will try it asap.
@Nan-Do 's quants do not work for me (they generate random tokens), so I made my own basic quants of this model:
https://huggingface.co./Rybens/FusionNet_7Bx2_MoE_14B_gguf
Thanks @Nan-Do. But I'll leave my repository with quants in case anyone needs it.
@Rybens sure, that's good.
Just tried out the model in LM Studio. Works very well! I'm impressed!