This dataset is for ["2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining"](https://arxiv.org/abs/2501.00958).

- Our code can be found in [Multimodal-Textbook](https://github.com/DAMO-NLP-SG/multimodal_textbook/tree/master).

Note: We have uploaded the annotation file (`./multimodal_textbook.json`) and the image folder (`./dataset_images_interval_7.tar.gz`), which contain the keyframes and the processed ASR and OCR texts. For more details, please refer to [Using Multimodal Textbook](#using-multimodal-textbook).

<img src="./src/page_fig.png" alt="Image" style="width: 900px;">

Then we extract 6.5M keyframes and 0.75B text (ASR+OCR) tokens from these videos.

<img src="./src/table.png" alt="Image" style="width: 900px;">

## Using Multimodal Textbook

### Description of Dataset

We provide the annotation file (JSON) and the corresponding image folder for the textbook:

- Dataset json-file: `./multimodal_textbook.json` (600k samples, ~11 GB)
- Dataset image_folder: `./dataset_images_interval_7.tar.gz` (6.5M images, ~600 GB). (**Due to its large size, we split it into 20 sub-files: `dataset_images_interval_7.tar.gz.part_00`, `dataset_images_interval_7.tar.gz.part_01`, ...**)
- Video meta data: `video_meta_data/video_meta_data1.json` and `video_meta_data/video_meta_data2.json` contain the meta information of the collected videos, including the video vid, title, description, duration, language, and searched knowledge points. We also provide `multimodal_textbook_meta_data.json.zip`, which records the textbook in its video format rather than in the OBELICS format.
- Original videos: you can download the original videos using the video ids provided in `video_meta_data`, for example as sketched in the snippet below.
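
For illustration, here is a minimal sketch of downloading a few original videos with the third-party `yt-dlp` package. The list layout of `video_meta_data1.json` and the `vid` field name are assumptions for this sketch; adjust them to the actual file.

```
import json
from yt_dlp import YoutubeDL  # third-party: pip install yt-dlp

# Assumption: video_meta_data1.json is a list of dicts, each carrying its YouTube id as "vid".
with open("video_meta_data/video_meta_data1.json", "r") as f:
    meta = json.load(f)

urls = [f"https://www.youtube.com/watch?v={entry['vid']}" for entry in meta[:5]]  # first few videos

# Save each video as raw_videos/<video id>.<ext>
with YoutubeDL({"outtmpl": "raw_videos/%(id)s.%(ext)s"}) as ydl:
    ydl.download(urls)
```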

### Learning about image_folder

After you download the 20 split archive files (`dataset_images_interval_7.tar.gz.part_*`), you need to merge them first and then decompress. Please do not unzip a single split file on its own; that will lead to an error.

```
cd multimodal_textbook
cat dataset_images_interval_7.tar.gz.part_* > dataset_images_interval_7.tar.gz
tar -xzvf dataset_images_interval_7.tar.gz
```

After the above steps, you will get the image folder `dataset_images_interval_7`, which is approximately 600 GB and contains the 6.5 million keyframes. Each sub-folder in `dataset_images_interval_7` is named with a video id.
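
As a quick sanity check after extraction (an illustrative snippet, not part of the release), you can count the video sub-folders and keyframes:

```
from pathlib import Path

root = Path("dataset_images_interval_7")  # adjust to wherever you extracted the archive

video_dirs = [d for d in root.iterdir() if d.is_dir()]  # one sub-folder per video id
num_frames = sum(1 for _ in root.rglob("*.jpg"))        # keyframes are stored as .jpg files

print(f"{len(video_dirs)} video folders, {num_frames} keyframes")
```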

### Naming Rule of keyframe

For each keyframe, the naming rule is `video id@start-time_end-time#keyframe-number.jpg`. For example, the path and file name of one keyframe is `dataset_images_interval_7/-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#2.jpg`.

This means that the image is extracted from the video `-1uixJ1V-As`: it is the second keyframe (#2) in the clip spanning 10.0 to 55.0 seconds of that video. You can access the original video at [https://www.youtube.com/watch?v=-1uixJ1V-As](https://www.youtube.com/watch?v=-1uixJ1V-As).
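
If you need to recover the clip boundaries programmatically, here is a minimal parsing sketch. The helper below is illustrative and not part of the released code; it only assumes the naming rule stated above.

```
import re
from pathlib import Path

# Illustrative helper: parse "video id@start_end#index.jpg" back into its parts.
KEYFRAME_RE = re.compile(r"^(?P<vid>.+)@(?P<start>[\d.]+)_(?P<end>[\d.]+)#(?P<idx>\d+)\.jpg$")

def parse_keyframe(path: str) -> dict:
    m = KEYFRAME_RE.match(Path(path).name)
    if m is None:
        raise ValueError(f"unexpected keyframe name: {path}")
    return {
        "vid": m.group("vid"),
        "start": float(m.group("start")),
        "end": float(m.group("end")),
        "index": int(m.group("idx")),
    }

print(parse_keyframe("dataset_images_interval_7/-1uixJ1V-As/-1uixJ1V-As@10.0_55.0#2.jpg"))
# -> {'vid': '-1uixJ1V-As', 'start': 10.0, 'end': 55.0, 'index': 2}
```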

### Learning about annotation file

The format of each sample in `multimodal_textbook.json` is as follows; images and texts are interleaved:

```
"images": [
    null,
    ......
],
"texts": [
    null,
    "Hi everyone, and welcome to another lesson in our Eureka Tips for computers series .....",
    null,
    "I'm actually trying to use the number line to find the sum for each. So to start I'm going to use the paint tool to demonstrate. Let's use the number line for four plus five. We're going to start at four then we're going to count up five. One two three four five. That equals nine. Now let's do three plus six for the next one.",
    ....
],
```

Each sample has approximately 10.7 images and 1927 text tokens. You need to replace each image path prefix (`/mnt/workspace/zwq_data/interleaved_dataset/`) with your own image folder path.
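
A minimal sketch of that replacement, assuming the annotation file is a top-level list of samples whose `images` field holds absolute paths (the output file name is a placeholder; adapt as needed):

```
import json

OLD_PREFIX = "/mnt/workspace/zwq_data/interleaved_dataset/"
NEW_PREFIX = "/your/path/to/dataset_images_interval_7/"  # your local image folder

with open("multimodal_textbook.json", "r") as f:
    samples = json.load(f)  # assumption: a list of {"images": [...], "texts": [...]} samples

for sample in samples:
    # Keep interleaving intact: nulls stay nulls, paths get the new prefix.
    sample["images"] = [
        img.replace(OLD_PREFIX, NEW_PREFIX) if img is not None else None
        for img in sample["images"]
    ]

with open("multimodal_textbook_local.json", "w") as f:
    json.dump(samples, f)
```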

### Learning about metadata of instructional video

The format of `./video_meta_data/video_meta_data1.json` is as follows:
```
{
    "file_path": xxx,
    ......
},
```

In addition, `multimodal_textbook_meta_data.json.zip` records the textbook in its video format: each video clip is stored as a dict, and each sample includes multiple consecutive video clips from the same video. Sometimes one sample may also include clips from different long videos; when a long video ends, this is marked with an `'End of a Video'` entry.

```
{'token_num': 1657,
 'conversations': [
    {
      'vid': video id-1,
      'clip_path': video id-1-clip1,
      'asr': ASR transcribed from audio,
      'extracted_frames': keyframe sequence extracted according to time intervals, as [image1, image2, ....],
      'image_tokens': xxx,
      'token_num': xxx,
      'refined_asr': refined version of the original ASR,
      'ocr_internvl_8b': OCR obtained using internvl_8b,
      'ocr_image': the image the OCR comes from,
      'ocr_internvl_8b_deduplicates': xxx,
      'keyframe_ssim': keyframe sequence extracted according to the SSIM algorithm,
      'asr_token_num': xxx,
      'ocr_qwen2_vl_72b': OCR obtained using qwen2_vl_72b
    },
    {
      'vid': video id-1,
      'clip_path': video id-1-clip2,
      'asr': ASR transcribed from audio,
      'extracted_frames': keyframe sequence extracted according to time intervals, as [image3, image4, ....],
      .....
    },
    {
      'vid': 'End of a Video',
      ......
    },
    {
      'vid': video id-2,
      'clip_path': video id-2-clip1,
      'asr': ASR transcribed from audio,
      'extracted_frames': keyframe sequence extracted according to time intervals, as [image5, image6, ....],
      ....
    },
    ....
  ]
}
```

In the example above, the first two video clips come from the same video, the third dict marks the end of that video, and the fourth clip comes from a new video.
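
A small sketch of walking through this structure, assuming the unzipped file is a top-level list of such samples (the unzipped file name below is a placeholder):

```
import json

# Placeholder path: unzip multimodal_textbook_meta_data.json.zip first.
with open("multimodal_textbook_meta_data.json", "r") as f:
    samples = json.load(f)  # assumption: a list of samples like the example above

for sample in samples[:3]:                       # inspect the first few samples
    videos, current = [], []
    for clip in sample["conversations"]:
        if clip["vid"] == "End of a Video":      # marker separating consecutive source videos
            if current:
                videos.append(current)
            current = []
        else:
            current.append(clip)
    if current:
        videos.append(current)
    print(f"token_num={sample['token_num']}, videos={len(videos)}, "
          f"clips per video={[len(v) for v in videos]}")
```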


## Citation

```
@article{zhang20252,
  title={2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining},
  author={Zhang, Wenqi and Zhang, Hang and Li, Xin and Sun, Jiashuo and Shen, Yongliang and Lu, Weiming and Zhao, Deli and Zhuang, Yueting and Bing, Lidong},
  journal={arXiv preprint arXiv:2501.00958},
  year={2025}
}
```