- LLaVa-1.5 Collection — a series of vision-language models (VLMs) trained on a variety of visual instruction datasets. 3 items, updated Mar 18, 2024.
- The Multimodal Universe: Enabling Large-Scale Machine Learning with 100TB of Astronomical Scientific Data — paper, arXiv:2412.02527, published Dec 3, 2024.
- Meta Llama 3 Collection — hosts the transformers and original repos of the Meta Llama 3 and Llama Guard 2 releases. 5 items, updated Dec 6, 2024.