czczup committed
Commit dfc773f • 1 Parent(s): 71c7c74

Update README.md

Files changed (2)
  1. README.md +10 -3
  2. config.json +1 -1
README.md CHANGED

````diff
@@ -15,9 +15,9 @@ new_version: OpenGVLab/InternViT-6B-448px-V2_5
 
 # InternViT-6B-448px-V1-5
 
-[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5\]](https://arxiv.org/abs/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261)
+[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5\]](https://arxiv.org/abs/2404.16821) [\[📜 InternVL 2.5\]](https://github.com/OpenGVLab/InternVL/blob/main/InternVL2_5_report.pdf)
 
-[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
+[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
 
 <div align="center">
 <img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
@@ -35,7 +35,10 @@ We develop InternViT-6B-448px-V1-5 based on the pre-training of the strong found
 To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO.
 - **Note:** InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build an MLLM based on this model, **please make use of the features from the last layer.**
 
-## Model Usage (Image Embeddings)
+## Quick Start
+
+> \[!Warning\]
+> 🚨 Note: In our experience, the InternViT V2.5 series is better suited for building MLLMs than traditional computer vision tasks.
 
 ```python
 import torch
@@ -58,6 +61,10 @@ pixel_values = pixel_values.to(torch.bfloat16).cuda()
 outputs = model(pixel_values)
 ```
 
+## License
+
+This project is released under the MIT License.
+
 ## Citation
 
 If you find this project useful in your research, please consider citing:
````
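The hunks above show only fragments of the README's Quick Start snippet (`import torch`, the `pixel_values` cast in the hunk header, and `outputs = model(pixel_values)`). For readers of this diff, here is a minimal sketch of the image-embedding usage those context lines come from, assuming the standard `transformers` `AutoModel`/`CLIPImageProcessor` loading path; the example image path is illustrative:

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the 45-block vision encoder in bfloat16 on GPU; trust_remote_code
# pulls in the InternViT modeling code shipped with the checkpoint.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

# The image path is illustrative; any RGB image works.
image = Image.open('./examples/image1.jpg').convert('RGB')

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

outputs = model(pixel_values)
```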
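The **Note** carried through the second hunk tells MLLM builders to use the last layer's features. As a follow-up to the sketch above, one hedged way to pull those features out, assuming the checkpoint's remote code returns the usual `transformers` `BaseModelOutputWithPooling` (the attribute name `last_hidden_state` is that convention, not something this diff confirms):

```python
# Continuing from the snippet above. Because the last 3 of the original 48
# blocks were discarded, the final (45th) block's output is exactly the
# fourth-to-last-block feature the note recommends for MLLMs.
features = outputs.last_hidden_state  # (batch, 1 + num_patches, hidden_size)

# Drop the CLS token if your MLLM projector consumes patch tokens only
# (a common, but not universal, design choice).
patch_features = features[:, 1:, :]
```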
config.json CHANGED

```diff
@@ -25,7 +25,7 @@
   "qk_normalization": true,
   "qkv_bias": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.36.2",
+  "transformers_version": "4.37.2",
   "use_bfloat16": true,
   "use_flash_attn": true
 }
```
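The only substantive change to `config.json` is the recorded `transformers_version` bump from 4.36.2 to 4.37.2. If you want to check your local install against it before loading, a purely optional sketch (the `packaging` comparison and the warning wording are illustrative; nothing in the repo enforces this):

```python
from packaging import version  # standard version-comparison helper
import transformers

# 4.37.2 is the version now recorded in config.json; older installs may
# still load the checkpoint, so warn instead of raising.
if version.parse(transformers.__version__) < version.parse("4.37.2"):
    print(f"transformers {transformers.__version__} predates the 4.37.2 "
          "recorded in this checkpoint's config.json")
```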