
LMMs-Lab
AI & ML interests
Feeling and building the multimodal intelligence.
Recent Activity
[2025-03] 👓👓 Introducing EgoLife: Towards Egocentric Life Assistant. For one week, six individuals lived together, capturing every moment through AI glasses and creating the EgoLife dataset. Building on it, we develop models and benchmarks to drive the future of AI life assistants capable of recalling past events, tracking habits, and providing personalized, long-context assistance to enhance daily life.
[2025-01] 🎬🎬 Introducing VideoMMMU: Evaluating Knowledge Acquisition from Professional Videos. Spanning 6 professional disciplines (Art, Business, Science, Medicine, Humanities, Engineering) and 30 diverse subjects, Video-MMMU challenges models to learn and apply college-level knowledge from videos.
[2024-11] 🔔🔔 We are excited to introduce LMMs-Eval/v0.3.0, focusing on audio understanding. Building upon LMMs-Eval/v0.2.0, we have added audio models and tasks. LMMs-Eval now provides a consistent evaluation toolkit across image, video, and audio modalities.
[2024-11] 🤯🤯 We introduce Multimodal SAE, the first framework designed to interpret learned features in large-scale multimodal models using Sparse Autoencoders. Through our approach, we leverage LLaVA-OneVision-72B to analyze and explain the SAE-derived features of LLaVA-NeXT-LLaMA3-8B. Furthermore, we demonstrate the ability to steer model behavior by clamping specific features to alleviate hallucinations and avoid safety-related issues.
[2024-10] 🔥🔥 We present LLaVA-Critic, the first open-source large multimodal model that serves as a generalist evaluator for assessing LMM-generated responses across diverse multimodal tasks and scenarios.
[2024-10] 🎬🎬 Introducing LLaVA-Video, a family of open large multimodal models designed specifically for advanced video understanding. We are open-sourcing LLaVA-Video-178K, a high-quality synthetic dataset for video instruction tuning.
[2024-08] 🤞🤞 We present LLaVA-OneVision, a family of LMMs developed by consolidating insights into data, models, and visual representations.
[2024-06] 🧑‍🎨🧑‍🎨 We release LLaVA-NeXT-Interleave, an LMM extending capabilities to real-world settings: Multi-image, Multi-frame (videos), Multi-view (3D), and Multi-patch (single-image).
[2024-06] 🚀🚀 We release LongVA, a long language model with state-of-the-art video understanding performance.
Older Updates (2024-06 and earlier)
[2024-06] 🎬🎬 The lmms-eval/v0.2 toolkit now supports video evaluations for models like LLaVA-NeXT Video and Gemini 1.5 Pro.
[2024-05] 🚀🚀 We release LLaVA-NeXT Video, a model performing at Google's Gemini level on video understanding tasks.
[2024-05] 🚀🚀 The LLaVA-NeXT model family reaches near-GPT-4V performance on multimodal benchmarks, with models up to 110B parameters.
[2024-03] We release lmms-eval, a toolkit for holistic evaluations with 50+ multimodal datasets and 10+ models.
Collections: 11
Spaces: 4
Models: 46

lmms-lab/EgoGPT-0.5b-Demo
lmms-lab/EgoGPT-7b-EgoIT
lmms-lab/EgoGPT-7b-EgoIT-EgoLife
lmms-lab/EgoGPT-7b-Demo
lmms-lab/LLaVA-NeXT-Video-7B-DPO
lmms-lab/LLaVA-NeXT-Video-7B
lmms-lab/Qwen2-VL-2B-GRPO-8k
lmms-lab/Qwen2-VL-7B-GRPO-8k
lmms-lab/llama3-llava-next-8b-hf-sae-131k
