This dataset is for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining". We extract 6.5M keyframes and 0.75B text (ASR + OCR) tokens from the crawled videos.

Our code can be found in [Multimodal-Textbook](https://github.com/DAMO-NLP-SG/multimodal_textbook/tree/master).

Note: We have uploaded the annotation file (`./multimodal_textbook.json`), which contains the processed ASR and OCR texts. The keyframes (`./dataset_images_interval_7.tar.gz`) have been split into 20 sub-files and are still being processed and uploaded due to their large size. For more details, please refer to [Using Our Dataset](#using-our-dataset).

<img src="./src/page_fig.png" alt="Image" style="width: 900px;">

### Dataset

We provide the JSON file and the corresponding image folder for the textbook:

- Dataset JSON file: `./multimodal_textbook.json` (610k samples, ~11 GB)
- Dataset image folder: `./dataset_images_interval_7.tar.gz` (6.5M images, ~600 GB). (**Due to its large size, we split it into 20 sub-files; they are still being processed and will be uploaded soon.**)
- Video meta data: `video_meta_data/video_meta_data1.json` and `video_meta_data/video_meta_data2.json` contain the meta information of the crawled videos, including each video's vid, title, description, duration, language, and searched knowledge points. `multimodal_textbook_meta_data.json.zip` records the textbook in its original format, not in the OBELICS format.
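
Since the image archive is distributed as sub-files, you will need to reassemble it before extraction. The sketch below assumes the sub-files follow a `split`-style naming scheme (`*.part_*`); check the actual file names in the repository before running, as the naming here is an assumption, not confirmed by the release.

```shell
# Reassemble the split archive into a single tarball, then extract it.
# NOTE: the `.part_*` suffix is an assumed naming scheme -- verify the
# real sub-file names on the dataset page first.
cat dataset_images_interval_7.tar.gz.part_* > dataset_images_interval_7.tar.gz
mkdir -p dataset_images_interval_7
tar -xzf dataset_images_interval_7.tar.gz -C dataset_images_interval_7
```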

Each sample contains approximately 10.7 images and 1,927 text tokens. After you download and unzip the image folder, you need to replace each image path in the JSON file (prefixed with `/mnt/workspace/zwq_data/interleaved_dataset/`) with your own image folder path.
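The path rewrite above can be sketched as a plain prefix substitution over the raw JSON text, which avoids depending on the exact sample schema. The output filename and the target folder below are placeholders you should adjust; only the old prefix comes from the README itself.

```python
import json

# Prefix documented in the README; the new prefix is your local folder.
OLD_PREFIX = "/mnt/workspace/zwq_data/interleaved_dataset/"
NEW_PREFIX = "/path/to/your/dataset_images_interval_7/"  # placeholder

def retarget_image_paths(in_file: str, out_file: str) -> None:
    """Rewrite every occurrence of the original image-path prefix.

    The prefix only appears inside image-path strings, so a textual
    replacement is sufficient and schema-agnostic.
    """
    with open(in_file, "r", encoding="utf-8") as f:
        raw = f.read()
    patched = raw.replace(OLD_PREFIX, NEW_PREFIX)
    json.loads(patched)  # sanity check: the result is still valid JSON
    with open(out_file, "w", encoding="utf-8") as f:
        f.write(patched)

# Usage (after downloading the annotation file):
# retarget_image_paths("multimodal_textbook.json", "multimodal_textbook_local.json")
```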