---
task_categories:
- question-answering
language:
- en
size_categories:
- 1K<n<10K


dataset_info:
  - config_name: Real_Time_Visual_Understanding
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Real_Time_Visual_Understanding
        num_examples: 2500

  - config_name: Sequential_Question_Answering
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Sequential_Question_Answering
        num_examples: 250


  - config_name: Contextual_Understanding
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Contextual_Understanding
        num_examples: 500

  - config_name: Omni_Source_Understanding
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Omni_Source_Understanding
        num_examples: 1000

    
  - config_name: Proactive_Output
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: ground_truth_time_stamp
        dtype: string
      - name: ground_truth_output
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Proactive_Output
        num_examples: 250


configs:
  - config_name: Real_Time_Visual_Understanding
    data_files:
    - split: Real_Time_Visual_Understanding
      path: StreamingBench/Real_Time_Visual_Understanding.csv 
      
  - config_name: Sequential_Question_Answering
    data_files:
    - split: Sequential_Question_Answering
      path: StreamingBench/Sequential_Question_Answering.csv

  - config_name: Contextual_Understanding
    data_files:
    - split: Contextual_Understanding
      path: StreamingBench/Contextual_Understanding.csv
      
  - config_name: Omni_Source_Understanding
    data_files:
    - split: Omni_Source_Understanding
      path: StreamingBench/Omni_Source_Understanding.csv
      
  - config_name: Proactive_Output
    data_files:
    - split: Proactive_Output
      path: StreamingBench/Proactive_Output_50.csv
    - split: Proactive_Output_250
      path: StreamingBench/Proactive_Output.csv
    
---
# StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding

<div align="center">
  <img src="./figs/icon.png" width="100%" alt="StreamingBench Banner">

  <div style="margin: 30px 0">
    <a href="https://streamingbench.github.io/" style="margin: 0 10px">๐Ÿ  Project Page</a> |
    <a href="https://arxiv.org/abs/2411.03628" style="margin: 0 10px">๐Ÿ“„ arXiv Paper</a> |
    <a href="https://huggingface.co./datasets/mjuicem/StreamingBench" style="margin: 0 10px">๐Ÿ“ฆ Dataset</a> |
    <a href="https://streamingbench.github.io/#leaderboard" style="margin: 0 10px">๐Ÿ…Leaderboard</a>
  </div>
</div>

**StreamingBench** evaluates **Multimodal Large Language Models (MLLMs)** on real-time, streaming video understanding tasks. 🌟

## 🎞️ Overview

As MLLMs continue to advance, they remain largely focused on offline video comprehension, where all frames are pre-loaded before any query is made. This is far from the human ability to process and respond to video streams in real time, as content arrives continuously and context changes from moment to moment. To bridge this gap, **StreamingBench** introduces the first comprehensive benchmark for streaming video understanding in MLLMs.
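The distinction can be made concrete with a minimal sketch (an illustration, not StreamingBench's released evaluation code; `model.answer`, the frame list, and the 1 fps sampling rate are placeholder assumptions):

```python
from typing import Any, Sequence

def offline_answer(model: Any, frames: Sequence, question: str) -> str:
    # Offline setting: the model sees every frame of the video,
    # no matter when the question is asked.
    return model.answer(frames, question)

def streaming_answer(model: Any, frames: Sequence, question: str,
                     query_time_s: float, fps: float = 1.0) -> str:
    # Streaming setting: only frames captured before the question's
    # timestamp are visible to the model.
    visible = frames[: int(query_time_s * fps)]
    return model.answer(visible, question)
```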

### Key Evaluation Aspects
- 🎯 **Real-time Visual Understanding**: Can the model process and respond to visual changes in real time?
- 🔊 **Omni-source Understanding**: Does the model integrate visual and audio inputs synchronously in real-time video streams?
- 🎬 **Contextual Understanding**: Can the model comprehend the broader context within video streams?

### Dataset Statistics
- 📊 **900** diverse videos
- 📝 **4,500** human-annotated QA pairs
- ⏱️ Five questions per video, each posed at a different timestamp
#### 🎬 Video Categories
<div align="center">
  <img src="./figs/StreamingBench_Video.png" width="80%" alt="Video Categories">
</div>

#### 🔍 Task Taxonomy
<div align="center">
  <img src="./figs/task_taxonomy.png" width="80%" alt="Task Taxonomy">
</div>


## 🔬 Experimental Results

### Performance of Various MLLMs on StreamingBench
- All Context
<div align="center">
  <img src="./figs/result_1.png" width="80%" alt="Task Taxonomy">
</div>

- 60 seconds of context preceding the query time
<div align="center">
  <img src="./figs/result_2.png" width="80%" alt="Task Taxonomy">
</div>

- Comparison of Main Experiment vs. 60 Seconds of Video Context
<div align="center">
  <img src="./figs/heatmap.png" width="80%" alt="Comparison Heatmap">
</div>

### Performance of Different MLLMs on the Proactive Output Task
*"โ‰ค xs" means that the answer is considered correct if the actual output time is within x seconds of the ground truth.*
<div align="center">
  <img src="./figs/po.png" width="80%" alt="Task Taxonomy">
</div>
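
Concretely, the metric reduces to a timestamp comparison. A hedged sketch, assuming `"HH:MM:SS"`-style strings like the dataset's `time_stamp` and `ground_truth_time_stamp` fields:

```python
def to_seconds(ts: str) -> float:
    # Parse an "HH:MM:SS" (or "MM:SS") timestamp into seconds.
    seconds = 0.0
    for part in ts.split(":"):
        seconds = seconds * 60 + float(part)
    return seconds

def correct_within(pred_ts: str, gt_ts: str, x: float) -> bool:
    # "≤ x s": correct if the model's output time falls within
    # x seconds of the ground-truth timestamp.
    return abs(to_seconds(pred_ts) - to_seconds(gt_ts)) <= x

print(correct_within("00:01:05", "00:01:02", x=3.0))  # True: 3 s apart
```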


## 📝 Citation
```bibtex
@article{lin2024streaming,
  title={StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding},
  author={Junming Lin and Zheng Fang and Chi Chen and Zihao Wan and Fuwen Luo and Peng Li and Yang Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2411.03628},
  year={2024}
}
```
