- OpenGVLab/InternVideo2_5_Chat_8B (Video-Text-to-Text • 15.2k • 44)
- OpenGVLab/InternVL_2_5_HiCo_R16 (Video-Text-to-Text • 785 • 3)
- OpenGVLab/InternVL_2_5_HiCo_R64 (Video-Text-to-Text • 173 • 1)
- Paper: InternVideo2.5: Empowering Video MLLMs with Long and Rich Context Modeling (arXiv 2501.12386)

OpenGVLab
AI & ML interests: Computer Vision
Organization Card
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on Vision-Centric AI research. The "GV" in our name stands for general vision: a general understanding of vision, so that little effort is needed to adapt our models to new vision-based tasks.
Models
- InternVL: a pioneering open-source alternative to GPT-4V.
- InternImage: a large-scale vision foundation model built on deformable convolutions.
- InternVideo: large-scale video foundation models for multimodal understanding.
- VideoChat: an end-to-end chat assistant for video comprehension.
- All-Seeing-Project: towards panoptic visual recognition and understanding of the open world.
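All of the models above are distributed through the Hugging Face Hub. A minimal loading sketch, assuming the `transformers` library; the checkpoint name is taken from this organization's model list, and the exact inference API varies per model family, so consult the individual model card:

```python
# Sketch: loading an OpenGVLab checkpoint from the Hugging Face Hub.
# Assumes `pip install transformers`. trust_remote_code=True is needed
# because these repos ship custom modeling code; the chat/inference
# interface differs per family, so check the model card.

MODEL_ID = "OpenGVLab/InternVL2-4B"  # listed under this organization's models

def load_model(model_id: str = MODEL_ID):
    """Return (model, tokenizer); imports lazily so the sketch is inspectable offline."""
    from transformers import AutoModel, AutoTokenizer  # heavy dependency
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_model()  # downloads weights on first call
```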
Datasets
- ShareGPT4o: a large-scale resource, planned for open-source release, comprising 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
- InternVid: a large-scale video-text dataset for multimodal understanding and generation.
- MMPR: a high-quality, large-scale multimodal preference dataset.
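These datasets follow the standard Hub layout, so they can typically be pulled with the `datasets` library. A minimal streaming sketch; the repo id comes from the list above, but the split name and record fields are assumptions, so check the dataset card:

```python
# Sketch: streaming an OpenGVLab dataset from the Hugging Face Hub.
# Assumes `pip install datasets`. Available splits and field names
# vary per dataset; inspect the dataset card before relying on them.

DATASET_ID = "OpenGVLab/MMPR-v1.1"  # multimodal preference data listed above

def iter_examples(repo_id: str = DATASET_ID, split: str = "train", limit: int = 3):
    """Yield a few examples without downloading the full dataset (lazy import)."""
    from datasets import load_dataset  # heavy dependency
    ds = load_dataset(repo_id, split=split, streaming=True)
    for i, example in enumerate(ds):
        if i >= limit:
            break
        yield example
```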
Benchmarks
- MVBench: a comprehensive benchmark for multimodal video understanding.
- CRPE: a benchmark covering all elements of the relation triplets (subject, predicate, object), providing a systematic platform for the evaluation of relation comprehension ability.
- MM-NIAH: a comprehensive benchmark for comprehension of long multimodal documents.
- GMAI-MMBench: a comprehensive multimodal evaluation benchmark towards general medical AI.
Collections (20)
Spaces (11)
- 💬 InternVideo2.5: Hierarchical Compression for Long-Context Video Modeling (Runtime error)
- ⚡ InternVL: Chat with an AI that understands text and images (Running • 434)
- 🐨 MVBench Leaderboard: Submit model evaluation and view leaderboard (Running • 34)
- 👁 InternVideo2 Chat 8B HD: Upload a video to chat about its contents (Running on Zero • 16)
- 🚀 ControlLLM: Display maintenance message for ControlLLM (Running • 10)
- 🐍 VideoMamba: Classify video and image content (Running on Zero • 93)
Models (165)

- OpenGVLab/stage1-mm-projectors
- OpenGVLab/stage2-UMT-Qwen2_5_7B_1m-tome16_mlp (2)
- OpenGVLab/stage2-UMT-Qwen2-7B-tome16_mlp (4)
- OpenGVLab/stage2-InternVideo2-1B-Qwen2_5-7B-tome16_mlp (6)
- OpenGVLab/stage2-UMT-Qwen2_5_1.5B-tome16_mlp (4)
- OpenGVLab/InternVL2-4B (Image-Text-to-Text • 18.1k • 50)
- OpenGVLab/InternImage (15)
- OpenGVLab/VideoChat-Flash-Qwen2-7B_res448 (Video-Text-to-Text • 2.55k • 9)
- OpenGVLab/VideoChat-Flash-Qwen2_5-7B_InternVideo2-1B (Video-Text-to-Text • 56 • 1)
- OpenGVLab/VideoChat-Flash-Qwen2_5-7B-1M_res224 (Video-Text-to-Text • 100 • 1)
Datasets (31)
- OpenGVLab/VideoChat-Flash-Training-Data (Preview • 1.71k • 2)
- OpenGVLab/MMPR-v1.1 (Preview • 477 • 41)
- OpenGVLab/MMPR (Preview • 247 • 47)
- OpenGVLab/GMAI-MMBench (Preview • 240 • 15)
- OpenGVLab/V2PE-Data (Preview • 718 • 6)
- OpenGVLab/InternVL-Domain-Adaptation-Data (Preview • 372 • 9)
- OpenGVLab/GUI-Odyssey (Viewer • 7.74k • 145k • 12)
- OpenGVLab/OmniCorpus-YT (422 • 12)
- OpenGVLab/OmniCorpus-CC-210M (Viewer • 208M • 224 • 19)
- OpenGVLab/OmniCorpus-CC (Viewer • 986M • 14.5k • 13)