
Bartowski PRO

bartowski

AI & ML interests

Official model curator for https://lmstudio.ai/


Organizations

LM Studio, Arcee AI, Qwen, Crystal Care AI, Retis Labs, NeuroLattice, Cognitive Computations, LM Studio Community, Top Contributors: Model Downloads, private beta for deeplinks, Arcee Training Org, open/ acc

bartowski's activity

reacted to fdaudens's post with 🔥❤️ about 2 hours ago
Yes, DeepSeek R1's release is impressive. But the real story is what happened in just 7 days after:

- Original release: 8 models, 540K downloads. Just the beginning...

- The community turned those open-weight models into 550+ new models on Hugging Face. Total downloads? 2.5M, nearly 5X the originals.

The reason? DeepSeek models are open-weight, letting anyone build on top of them. Interesting to note that the community focused on quantized versions for better efficiency & accessibility. They want models that use less memory, run faster, and are more energy-efficient.

When you empower builders, innovation explodes. For everyone. 🚀

The most popular community model? @bartowski's DeepSeek-R1-Distill-Qwen-32B-GGUF version, with 1M downloads alone.
reacted to ngxson's post with 🔥 6 days ago
Check out my collection of pre-made GGUF LoRA adapters!

This allows you to use both the normal and abliterated versions of popular models like Llama, Qwen, etc., without having to double the amount of VRAM usage.

ngxson/gguf_lora_collection
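
As a rough illustration of why this saves memory: with llama-cpp-python you load the full base model once and apply the small LoRA GGUF on top, rather than keeping a second full copy of the weights. A minimal sketch, with placeholder repo and file names (check the collection above for the real ones):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo/file names -- look up the actual GGUFs you want to pair.
base = hf_hub_download("bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
                       "Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf")
adapter = hf_hub_download("ngxson/gguf_lora_collection",
                          "example-abliterated-LoRA.gguf")  # hypothetical filename

# One set of base weights in memory; the adapter adds only a small overhead
# instead of a second multi-GB model.
llm = Llama(model_path=base, lora_path=adapter, n_ctx=4096)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```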
reacted to ngxson's post with 🚀 18 days ago
replied to their post 21 days ago

I don't love the period in the name since I don't like using it for purposes other than the file extension

I don't love the underscore either for what it's worth, but period feels wrong haha

A hyphen (-) is probably ideal, but hyphens are already used in both author and model names, so the distinction between the two becomes blurred

posted an update 21 days ago
Switching to author_model-name

I posted a poll on Twitter, and others have mentioned interest in me using the convention of including the author name in the model path when I upload.

It has a couple of advantages, first and foremost of course being clarity about who uploaded the original model (did Qwen upload Qwen2.6? Or did someone fine-tune Qwen2.5 and name it 2.6 for fun?)

The second is that it avoids collisions: if multiple people upload the same model and I try to quant them both, I would normally end up colliding and being unable to upload both

I'll be implementing the change next week; there are just two final details I'm unsure about:

First, should the files also inherit the author's name?

Second, what to do in the case that the author name + model name pushes us past the character limit?

Haven't yet decided how to handle either case, so feedback is welcome, but also just providing this as a "heads up"
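
To make the convention concrete, here is a rough Python sketch of what the new repo names would look like, with the over-the-limit case deliberately left open (the max_len value below is an assumption for illustration, not a confirmed Hub limit):

```python
def quant_repo_name(author: str, model: str, suffix: str = "GGUF",
                    max_len: int = 96) -> str:
    """Build an author_model-name style repo name for quant uploads.

    max_len is an assumed repo-name limit; how to shorten names that
    exceed it is exactly the open question above.
    """
    name = f"{author}_{model}-{suffix}"
    if len(name) > max_len:
        # Placeholder: no truncation policy decided yet, so just flag it.
        raise ValueError(f"{name!r} exceeds {max_len} chars; needs a manual decision")
    return name


print(quant_repo_name("Qwen", "Qwen2.5-72B-Instruct"))
# -> Qwen_Qwen2.5-72B-Instruct-GGUF
```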
replied to their post about 1 month ago

No, it does not include the XS. The reason Q4_0 and IQ4_NL work, I think, is that they don't do any clever packing of the scaling factors; that's why K quants and IQ4_XS (which is like NL but with some K-quant logic) don't work yet

replied to their post about 1 month ago

Oh, yeah, of course... I added all the ARM quants but then not Q4_0, which is now the only one that would work haha..

I'll go and make a Q4_0 for it I suppose! Just this once

replied to their post about 1 month ago

Don't love adding more formats but if your results are accurate it does seem worth including

replied to their post about 1 month ago

I've updated it to "Legacy format, offers online repacking for ARM and AVX CPU inference." It is still overall a legacy format, but with the online repacking it is worth considering for speed

I'm hoping that IQ4_NL gets a few more packing options in the near future

replied to their post about 1 month ago

Hell yeah. Wish we could still compile offline; I get why it's not sustainable in the future, but until there's better support and more options it would be nice to keep it around

replied to their post about 2 months ago
replied to julien-c's post about 2 months ago

This makes perfect sense; average users definitely don't need to be uploading that much stuff privately. It's great for testing, but if it's not worth releasing publicly, it's not worth storing on servers for free :)

Great update!

reacted to julien-c's post with 🔥❤️ about 2 months ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free, and (unless blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)

docs: https://huggingface.co./docs/hub/storage-limits

We optimize our infrastructure continuously to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
posted an update about 2 months ago
Looks like Q4_0_N_M file types are going away

Before you panic, there's a new "preferred" method, which is online (I prefer the term on-the-fly) repacking: if you download Q4_0 and your setup can benefit from repacking the weights into interleaved rows (what Q4_0_4_4 was doing), it will do that automatically and give you similar performance (minor losses, I think, due to using intrinsics instead of assembly, but intrinsics are more maintainable)

You can see the reference PR here:

https://github.com/ggerganov/llama.cpp/pull/10446

So if you update your llama.cpp past that point, you won't be able to run Q4_0_4_4 (unless they add backwards compatibility back), but Q4_0 should be the same speed (though it may currently be bugged on some platforms)

As such, I'll stop making those newer model formats soon, probably end of this week unless something changes, but you should be safe to download Q4_0 quants and use those!

Also, IQ4_NL supports repacking, though not in as many shapes yet, but it should get a respectable speedup on ARM chips. The PR for that can be found here: https://github.com/ggerganov/llama.cpp/pull/10541

Remember, these are not meant for Apple silicon since those use the GPU and don't benefit from the repacking of weights
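
In practice there is nothing to configure on the user side: you just grab a plain Q4_0 (or IQ4_NL) file and let a recent llama.cpp build repack it at load time. A rough sketch with llama-cpp-python, using placeholder repo/file names:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo/file names -- pick a real Q4_0 quant from the model page.
gguf = hf_hub_download("bartowski/Qwen2.5-7B-Instruct-GGUF",
                       "Qwen2.5-7B-Instruct-Q4_0.gguf")

# On builds that include PR #10446, the CPU backend repacks Q4_0 weights into
# interleaved rows automatically when the ARM/AVX hardware benefits from it.
llm = Llama(model_path=gguf, n_threads=8)
print(llm("The capital of France is", max_tokens=8)["choices"][0]["text"])
```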
replied to nyuuzyou's post about 2 months ago
posted an update about 2 months ago
Old Mixtral model quants may be broken!

Recently Slaren over on llama.cpp refactored the model loader - in a way that's super awesome and very powerful - but it broke support for "split tensor MoE models", which applies to older Mixtral models

You may have seen my upload of one such older Mixtral model, jondurbin/bagel-dpo-8x7b-v0.2, and with the newest changes it seems to be able to run without issue

If you happen to run into issues with any other old Mixtral models, drop a link here and I'll try to remake them with the new changes so that we can continue enjoying them :)
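
If you want to check whether one of your old Mixtral quants is affected, a quick way is simply to try loading it. A minimal sketch with llama-cpp-python (the file name is a placeholder):

```python
from llama_cpp import Llama

# Placeholder path -- point this at the old split-tensor MoE GGUF you want to test.
path = "bagel-dpo-8x7b-v0.2.Q4_K_M.gguf"

try:
    Llama(model_path=path, n_ctx=512, verbose=False)
    print("loads fine with the refactored model loader")
except Exception as err:  # any load failure means the quant likely needs remaking
    print(f"broken quant, probably needs remaking: {err}")
```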
reacted to merve's post with ❤️ 2 months ago
your Hugging Face profile now has your recent activities 🤗
replied to their post 3 months ago

The test mark was added after the initial upload and after people pointed it out :) glad it is a good label though