---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- Video
- Text
size_categories:
- 1K<n<10K
---
|
|
|
<a href="" target="_blank"> |
|
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-thinking--in--space-red?logo=arxiv" height="20" /> |
|
</a> |
|
<a href="https://vision-x-nyu.github.io/thinking-in-space.github.io/" target="_blank"> |
|
<img alt="Website" src="https://img.shields.io/badge/🌎_Website-thinking--in--space-blue.svg" height="20" /> |
|
</a> |
|
<a href="https://github.com/vision-x-nyu/thinking-in-space" target="_blank" style="display: inline-block; margin-right: 10px;"> |
|
<img alt="GitHub Code" src="https://img.shields.io/badge/Code-thinking--in--space-white?&logo=github&logoColor=white" /> |
|
</a> |
|
|
|
|
|
# Visual Spatial Intelligence Benchmark (VSI-Bench) |
|
This repository contains the Visual Spatial Intelligence Benchmark (VSI-Bench), introduced in [Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces](https://arxiv.org/abs/2412.14171).
|
|
|
|
|
## Files |
|
The `test-00000-of-00001.parquet` file contains the complete dataset annotations and pre-loaded images, ready for processing with HF Datasets. It can be loaded using the following code: |
|
|
|
```python
from datasets import load_dataset

vsi_bench = load_dataset("nyu-visionx/VSI-Bench")
```
|
Additionally, we provide the source videos as `*.zip` archives.
|
|
|
## Dataset Description |
|
VSI-Bench quantitatively evaluates the visual-spatial intelligence of MLLMs from egocentric video. It comprises over 5,000 question-answer pairs derived from 288 real videos, sourced from the validation sets of the public indoor 3D scene reconstruction datasets `ScanNet`, `ScanNet++`, and `ARKitScenes`. These videos cover diverse environments -- including residential spaces, professional settings (e.g., offices, labs), and industrial spaces (e.g., factories) -- across multiple geographic regions. By repurposing these existing 3D reconstruction and understanding datasets, VSI-Bench benefits from accurate object-level annotations, which are used in question generation and could support future studies on the connection between MLLMs and 3D reconstruction.
|
|
|
The dataset contains the following fields: |
|
|
|
| Field Name | Description |
| :--------- | :---------- |
| `idx` | Global index of the entry in the dataset |
| `dataset` | Video source: `scannet`, `arkitscenes`, or `scannetpp` |
| `scene_name` | Scene (video) name for each question-answer pair |
| `question_type` | The type of task for the question |
| `question` | Question asked about the video |
| `options` | Answer choices (multiple-choice questions only) |
| `ground_truth` | Ground-truth answer to the question |
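
To make the schema concrete, here is a minimal sketch of inspecting a single entry (the `test` split name follows from the parquet file name above; the exact field contents should be checked against the dataset itself):

```python
from datasets import load_dataset

vsi_bench = load_dataset("nyu-visionx/VSI-Bench")

# Inspect one question-answer pair; field names follow the table above.
example = vsi_bench["test"][0]
print(example["question_type"], "-", example["question"])
print("options:", example["options"])  # populated only for multiple-choice questions
print("ground truth:", example["ground_truth"])
```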
|
|
|
## Evaluation |
|
|
|
VSI-Bench evaluates performance with two metrics: for multiple-choice questions, we use `Accuracy`, computed by exact matching against the ground-truth option. For numerical-answer questions, we introduce a new metric, `MRA (Mean Relative Accuracy)`, which measures how closely model predictions align with the ground-truth values.
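
For intuition, the following is a minimal sketch of MRA as described in the paper: relative accuracy is averaged over confidence thresholds θ ∈ {0.50, 0.55, ..., 0.95}, counting a prediction as correct at threshold θ when its relative error is below 1 - θ. The function name here is illustrative; the reference implementation is linked below.

```python
# Illustrative sketch only; see the linked utils.py for the reference implementation.
def mean_relative_accuracy(pred: float, target: float) -> float:
    """Fraction of thresholds theta for which |pred - target| / target < 1 - theta."""
    thresholds = [0.50 + 0.05 * i for i in range(10)]  # 0.50, 0.55, ..., 0.95
    hits = [abs(pred - target) / abs(target) < 1 - theta for theta in thresholds]
    return sum(hits) / len(hits)

# A prediction of 4.2 against a ground truth of 4.0 has 5% relative error,
# so it passes every threshold except theta = 0.95.
print(mean_relative_accuracy(4.2, 4.0))  # 0.9
```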
|
|
|
We provide an out-of-the-box evaluation of VSI-Bench in our [GitHub repository](https://github.com/vision-x-nyu/thinking-in-space), including the [metrics implementation](https://github.com/vision-x-nyu/thinking-in-space/blob/main/lmms_eval/tasks/vsibench/utils.py#L109C1-L155C36) used in our framework. For further details, please refer to our paper and GitHub repository.
|
|
|
## Citation |
|
|
|
```bibtex
@article{yang2024think,
  title={{Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces}},
  author={Yang, Jihan and Yang, Shusheng and Gupta, Anjali and Han, Rilyn and Fei-Fei, Li and Xie, Saining},
  year={2024},
  journal={arXiv preprint arXiv:2412.14171},
}
```