Kaizhao Liang

kz919

AI & ML interests

Multimodal foundation models

Recent Activity

Organizations

SambaNova Systems · Ontocord's M*DEL · Sambanova-Gradio-Hackathon

kz919's activity

reacted to rwightman's post with πŸ”₯πŸš€ 22 days ago
There's a new timm release, v1.0.12, with a focus on optimizers. The optimizer factory has been refactored; there's now a timm.optim.list_optimizers() and a new way to register optimizers and their attributes. As always, you can use a timm optimizer like a torch one: just replace torch.optim with timm.optim

New optimizers include:
* AdafactorBigVision - adafactorbv
* ADOPT - adopt / adoptw (decoupled decay)
* MARS - mars
* LaProp - laprop
* Cautious Optimizers - a modification applicable to all of the above; prefix with c, e.g. cadamw, cnadamw, csgdw, clamb, crmsproptf

I shared some caution comparisons in this model repo: rwightman/timm-optim-caution

For details, references, see the code: https://github.com/huggingface/pytorch-image-models/tree/main/timm/optim
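The "cautious" variants above can be illustrated with a small sketch. This is a minimal pure-Python reading of the Cautious Optimizers idea, not timm's actual implementation (the function name `cautious_update` and the exact rescaling are my assumptions): keep only the components of a proposed update (e.g. momentum, applied in the descent direction) that agree in sign with the current gradient, then rescale by the fraction that survive.

```python
def cautious_update(update, grad, eps=1e-8):
    """Sketch of cautious masking: zero out update components whose sign
    disagrees with the gradient, then rescale the survivors."""
    # 1 where the update component and gradient point the same way, else 0
    mask = [1.0 if u * g > 0 else 0.0 for u, g in zip(update, grad)]
    # rescale by the surviving fraction so average magnitude is preserved
    scale = 1.0 / (sum(mask) / len(mask) + eps)
    return [u * m * scale for u, m in zip(update, mask)]

# The second component opposes its gradient, so it is zeroed;
# the remaining components are scaled up by 1 / (2/3).
print(cautious_update([0.5, -0.2, 0.1], [1.0, 1.0, 1.0]))
```

In a real optimizer this masking would be applied elementwise to tensors at each step, between computing the raw update and applying it to the weights.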

reacted to qq8933's post with πŸ”₯ 22 days ago
reacted to MonsterMMORPG's post with πŸ‘πŸ€πŸ€―πŸ§ βž•πŸ˜ŽπŸ€—β€οΈπŸ‘€πŸš€πŸ”₯ 2 months ago
Huge FLUX LoRA vs. Fine Tuning / DreamBooth experiments completed. Batch size 1 vs. 7 fully tested as well, not only for realism but also for stylization. Datasets of 15 vs. 256 images compared too (expressions / emotions tested as well). Used Kohya GUI for training.

Full files and article : https://www.patreon.com/posts/112099700

Download images in full resolution to see prompts and model names

All training was done with Kohya GUI, can be run fully locally on Windows, and all runs used 1024x1024 resolution

Fine Tuning / DreamBooth works on GPUs with as little as 6 GB of VRAM (no quality degradation; results identical to the 48 GB config)

Best LoRA quality requires a 48 GB GPU; 24 GB also works really well, and 8 GB is the minimum for LoRA (with significant quality degradation)

Full size grids are also shared for the followings: https://www.patreon.com/posts/112099700

Additionally, I have shared the full training logs so you can see how long each checkpoint took. I have shared the best checkpoints, with their step counts and training times, broken down by LoRA vs. Fine Tuning, batch size 1 vs. 7, and 15-image vs. 256-image datasets, so the completed article is very detailed.

Check the images to see all shared files in the post.

Furthermore, a very detailed analysis article has been written, and all the latest DreamBooth / Fine Tuning configs and LoRA configs are shared, along with Kohya GUI installers for Windows, RunPod, and Massed Compute.

Moreover, I have shared 28 new realism and 37 new stylization testing prompts.

Current tutorials are as below:

Windows requirements CUDA, Python, cuDNN, and such : https://youtu.be/DrhUHnYfwC0

How to use SwarmUI : https://youtu.be/HKX8_F1Er_w

How to use FLUX on SwarmUI : https://youtu.be/bupRePUOA18

How to use Kohya GUI for FLUX training : https://youtu.be/nySGu12Y05k

How to use Kohya GUI for FLUX training on Cloud (RunPod and Massed Compute) : https://youtu.be/-uhL2nW7Ddw

reacted to their post with 😎πŸ”₯πŸš€ 3 months ago
Just for the meme.

But the clear lesson I learnt from building these demos is: the more powerful the underlying base model, the closer you will get to GPT4o1. CoT is nothing more than inducing the latent reasoning capability already in the model.

kz919/GPT4-O1-Proximas
posted an update 3 months ago
reacted to cbensimon's post with ❀️ 3 months ago
Hello everybody,

We've rolled out a major update to ZeroGPU! All the Spaces are now running on it.

Major improvements:

1. GPU cold starts about twice as fast!
2. RAM usage reduced by two-thirds, allowing more effective resource usage, meaning more GPUs for the community!
3. ZeroGPU initializations (coldstarts) can now be tracked and displayed (use progress=gr.Progress(track_tqdm=True))
4. Improved compatibility and PyTorch integration, increasing the number of ZeroGPU-compatible Spaces without requiring any modifications!

Feel free to ask in the post if you have any questions

πŸ€— Best regards,
Charles