Quick update from week 1 of the smol course. The community is taking the driver's seat and using the material for their own projects. If you want to do the same, join in!
- We have ongoing translation projects in Korean, Vietnamese, Portuguese, and Spanish.
- 3 chapters are ready for students, covering instruction tuning, preference alignment, and parameter-efficient fine-tuning.
- 3 chapters are in progress, on evaluation, vision language models, and synthetic data.
- Around 780 people have forked the repo to use it for learning, teaching, and sharing.
⏭️ Next step is to support people who want to use the course for teaching, content creation, internal knowledge sharing, or anything else. If you're into this, drop an issue or PR.
For anyone looking to boost their LLM fine-tuning and alignment skills this December, we're running a free and open course called smol course. It’s not big like Li Yin and @mlabonne, it’s just smol.
👷 It focuses on practical use cases, so if you’re working on something, bring it along.
👯♀️ It’s peer reviewed and open, so you can discuss and get feedback.
🤘 If you’re already a smol pro, feel free to drop a star or issue.
Part 1 starts now, and it’s on instruction tuning!
In case you missed everything this week: it’s all about vision language models and image preference datasets. Here are the models and datasets you can use in your projects.
QwQ-32B-Preview is the first open-weights model to reason like o1, with comparable performance. It’s large, but it’s acing some of the hardest tasks.
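If you want to poke at it, here’s a minimal sketch of loading it with transformers. The checkpoint id is the published Qwen repo; the prompt, dtype, and device settings are just illustrative choices for whatever hardware you have.

```python
# Minimal sketch: chatting with QwQ-32B-Preview via transformers.
# dtype/device settings are illustrative; a 32B model needs serious GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many r's are in the word 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```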
SmolVLM is a vision model built on the recently released SmolLM2. It uses the Idefics3 approach to add a vision encoder; the main differences are the smaller language model (1.7B instead of 8B) and more aggressive image compression. The result is a model that is very accurate for its memory footprint.
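Here’s a minimal sketch of running it with transformers. The HuggingFaceTB/SmolVLM-Instruct checkpoint id is the published one; the image URL is a placeholder, so swap in your own.

```python
# Minimal sketch: image + text chat with SmolVLM via transformers.
# The image URL below is a placeholder; substitute a real one.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

image = Image.open(requests.get("https://example.com/cat.png", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```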
ColSmolVLM is a vision embedding model based on SmolVLM, using the ColBERT late-interaction approach from ColPali. It has been shown to be great at document retrieval, and everyone should test it out in their RAG setups.
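For context, the core of the ColBERT/ColPali approach is MaxSim scoring: each query token finds its best-matching document patch and the per-token maxima are summed. A tiny sketch with illustrative tensor shapes (pure PyTorch, not the ColSmolVLM API itself):

```python
# Sketch of ColBERT-style late interaction (MaxSim) over multi-vector embeddings.
# Shapes are illustrative: 16 query tokens, 1024 image patches, 128-dim vectors.
import torch
import torch.nn.functional as F

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """query_emb: (num_query_tokens, dim); doc_emb: (num_doc_patches, dim).
    Both are L2-normalised, so dot products are cosine similarities."""
    sim = query_emb @ doc_emb.T          # (num_query_tokens, num_doc_patches)
    return sim.max(dim=1).values.sum()   # best patch per query token, summed

q = F.normalize(torch.randn(16, 128), dim=-1)
d = F.normalize(torch.randn(1024, 128), dim=-1)
print(maxsim_score(q, d))
```

At retrieval time you compute this score between the query and every candidate page, then rank by it.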
In an effort to build a FLUX-level open source image generation model, the community is building a dataset of image preferences. The dataset is already open and the project is still running. Join in!
TRL tutorial drop! This week I dropped a load of tutorials on fine-tuning and aligning models with TRL. If you’re upskilling in this space, you should check these out.
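To give a flavour of how little code SFT takes with TRL, here’s a minimal sketch. The model and dataset ids are illustrative placeholders, not a recommendation:

```python
# Minimal sketch: supervised fine-tuning with TRL's SFTTrainer.
# Model and dataset ids are illustrative; substitute your own.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # any causal LM checkpoint works here
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft", max_seq_length=1024),
)
trainer.train()
```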
SFT + Quantisation + Unsloth is a super easy way of squeezing extra performance out of an LLM at low latencies. Here are some handy resources to bootstrap your projects.
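As a starting point, here’s a hedged sketch of the Unsloth side: loading a pre-quantised 4-bit checkpoint and attaching LoRA adapters. The checkpoint id and hyperparameters are assumptions for illustration, not a tuned recipe.

```python
# Sketch: 4-bit quantised loading with Unsloth, plus LoRA adapters for SFT.
# Checkpoint id and LoRA hyperparameters below are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # pre-quantised 4-bit weights
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,             # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# The resulting PEFT model can be passed straight to TRL's SFTTrainer,
# as in the SFT sketch above.
```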