zamal/Molmo-4bit Thrilled to announce that the Molmo 7B 4-bit Space is now live! Quantization shrinks the model to roughly one-sixth of its original size with almost no performance loss, and the results will leave you amazed!
It runs on ZeroGPU, making it incredibly accessible for everyone!
Yesterday, I shared a blog post on generating data for fine-tuning ColPali using the Qwen/Qwen2-VL-7B-Instruct model.
To simplify testing this approach, I created a Space that lets you generate queries from an input document page image: davanstrien/ColPali-Query-Generator
I think there is much room for improvement, but I'm excited about the potential for relatively small VLMs to create synthetic data.