czczup committed
Commit 73bb495 · verified · 1 Parent(s): a5d2234

Update README.md

Files changed (1):
  1. README.md (+5 -7)
README.md CHANGED
@@ -81,7 +81,7 @@ The training pipeline for a single model in InternVL 2.5 is structured across th
 
  We introduce a progressive scaling strategy to align the vision encoder with LLMs efficiently. This approach trains with smaller LLMs first (e.g., 20B) to optimize foundational visual capabilities and cross-modal alignment before transferring the vision encoder to larger LLMs (e.g., 72B) without retraining. This reuse skips intermediate stages for larger models.
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/AVb_PSxhJq1z2eUFNYoqQ.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/UoNUyS7ctN5pBxNv9KnzH.png)
 
  Compared to Qwen2-VL's 1.4 trillion tokens, InternVL2.5-78B uses only 120 billion tokens—less than one-tenth. This strategy minimizes redundancy, maximizes pre-trained component reuse, and enables efficient training for complex vision-language tasks.
 
 
@@ -164,7 +164,7 @@ As shown in the following figure, from InternVL 1.5 to 2.0 and then to 2.5, the
 
  ### Video Understanding
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/uD5aYt2wNYL94Xn8MOVih.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/tcwH-i1qc8H16En-7AZ5M.png)
 
  ## Evaluation on Language Capability
 
@@ -511,10 +511,10 @@ Many repositories now support fine-tuning of the InternVL series models, includi
 
  ### LMDeploy
 
- LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
+ LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
 
  ```sh
- pip install lmdeploy>=0.5.3
+ pip install lmdeploy>=0.6.4
  ```
 
  LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
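
The context line above describes LMDeploy's VLM pipeline. A minimal sketch of that usage, assuming the `lmdeploy>=0.6.4` pin from this diff and the `OpenGVLab/InternVL2_5-26B` checkpoint used in the serving example below; the image URL and session length are placeholders:

```python
# Minimal LMDeploy VLM pipeline sketch, mirroring the usage the README describes.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# TurbomindEngineConfig tunes the inference engine; session_len here is an assumption.
pipe = pipeline('OpenGVLab/InternVL2_5-26B',
                backend_config=TurbomindEngineConfig(session_len=16384))

# load_image accepts a local path or a URL; this URL is a placeholder.
image = load_image('https://example.com/sample.jpg')

# A (prompt, image) tuple runs one multimodal query, much like a text-only LLM call.
response = pipe(('Describe this image in detail.', image))
print(response.text)
```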
@@ -538,8 +538,6 @@ If `ImportError` occurs while executing this case, please install the required d
 
  When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
 
- question = 'Describe this video in detail.'
-
  ```python
  from lmdeploy import pipeline, TurbomindEngineConfig
  from lmdeploy.vl import load_image
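
The hunk above ends inside the README's multi-image example, so only its imports are visible. A minimal sketch of the list-of-images pattern the context line describes, with placeholder URLs and an enlarged `session_len` as an assumption to cover the extra input tokens:

```python
# Multi-image sketch: several images passed as one list with a single prompt.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# session_len is enlarged because multiple images raise the input token count,
# as the README notes; the exact value here is an assumption.
pipe = pipeline('OpenGVLab/InternVL2_5-26B',
                backend_config=TurbomindEngineConfig(session_len=32768))

# Placeholder URLs; load_image takes local paths or URLs.
images = [load_image(url) for url in
          ('https://example.com/frame1.jpg', 'https://example.com/frame2.jpg')]

response = pipe(('Describe these images in detail.', images))
print(response.text)
```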
@@ -603,7 +601,7 @@ print(sess.response.text)
  LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
 
  ```shell
- lmdeploy serve api_server OpenGVLab/InternVL2_5-26B --backend turbomind --server-port 23333
+ lmdeploy serve api_server OpenGVLab/InternVL2_5-26B --server-port 23333
  ```
 
  To use the OpenAI-style interface, you need to install OpenAI:
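
The hunk cuts off after this sentence; the install command it introduces is `pip install openai`. For context, a minimal client sketch against the server started above, assuming the port 23333 from the serve command; the API key is an arbitrary placeholder, since a default local deployment does not validate it:

```python
# OpenAI-style client sketch against the local LMDeploy api_server.
from openai import OpenAI

# base_url points at the api_server started above; the key is a placeholder.
client = OpenAI(api_key='placeholder', base_url='http://0.0.0.0:23333/v1')

# The server exposes the deployed model under /v1/models.
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user', 'content': 'Describe this model in one sentence.'}],
    temperature=0.8,
)
print(response.choices[0].message.content)
```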
 