Building and better understanding vision-language models: insights and future directions • arXiv:2408.12637 • Published Aug 22, 2024 • 110 upvotes
Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model • arXiv:2408.11039 • Published Aug 20, 2024 • 54 upvotes
Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming • arXiv:2408.16725 • Published Aug 2024 • 50 upvotes
Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders • arXiv:2408.15998 • Published Aug 2024 • 81 upvotes
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs • arXiv:2406.16860 • Published Jun 24, 2024 • 55 upvotes
MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training • arXiv:2403.09611 • Published Mar 14, 2024 • 123 upvotes
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture • arXiv:2409.02889 • Published Sep 2024 • 53 upvotes
FrozenSeg: Harmonizing Frozen Foundation Models for Open-Vocabulary Segmentation • arXiv:2409.03525 • Published Sep 2024 • 11 upvotes
PiTe: Pixel-Temporal Alignment for Large Video-Language Model • arXiv:2409.07239 • Published Sep 2024 • 11 upvotes
One missing piece in Vision and Language: A Survey on Comics Understanding • arXiv:2409.09502 • Published Sep 2024 • 23 upvotes
Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution • arXiv:2409.12191 • Published Sep 2024 • 55 upvotes
MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines • arXiv:2409.12959 • Published Sep 2024 • 32 upvotes
InfiMM-WebMath-40B: Advancing Multimodal Pre-Training for Enhanced Mathematical Reasoning • arXiv:2409.12568 • Published Sep 2024 • 43 upvotes