Finetuning script using HuggingFace (No llama-factory)
https://github.com/2U1/Qwen2-VL-Finetune
I made this code for anyone who wants to use the Hugging Face version for fine-tuning and, like me, has difficulty using some of the other frameworks.
This code uses only Hugging Face libraries to fine-tune the 7B and 2B models.
Also, you can set a different learning_rate for the vision_model and the language_model (and also for the merger).
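For example, the separate learning rates can be wired up through optimizer parameter groups. A minimal sketch (hypothetical helper, assuming the standard Qwen2-VL module prefixes "visual" and "visual.merger"; the actual script differs in details):

import torch

def build_param_groups(model, lr_vision=2e-6, lr_merger=1e-5, lr_llm=1e-5):
    # Sort trainable parameters into three groups by module-name prefix.
    vision, merger, llm = [], [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if name.startswith("visual.merger"):
            merger.append(param)
        elif name.startswith("visual"):
            vision.append(param)
        else:
            llm.append(param)
    # Each group gets its own learning rate.
    return [
        {"params": vision, "lr": lr_vision},
        {"params": merger, "lr": lr_merger},
        {"params": llm, "lr": lr_llm},
    ]

# optimizer = torch.optim.AdamW(build_param_groups(model), weight_decay=0.0)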
Feedback and issues are welcome!
Thanks for sharing it! Is there any video demo for this fine-tuning codebase?
@2U1 thanks for the scripts for LoRA-tuning the model.
I was trying to fine-tune it on a small dataset of ~2000 samples (single-image, single-turn QA).
I was trying to do it on Kaggle with 29GB RAM and 2 x T4 GPUs with 15GB each...but I always run into CUDA OOM (with no offload, or with only params offloaded) and RAM OOM if both params and optimizer are offloaded to CPU. Is there any way out? What is the suggested compute?
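The kind of CPU-offload setup I am referring to is along these lines (a generic DeepSpeed ZeRO-3 offload sketch, not my exact file; values are illustrative):

{
  "zero_optimization": {
    "stage": 3,
    "offload_param": { "device": "cpu", "pin_memory": true },
    "offload_optimizer": { "device": "cpu", "pin_memory": true }
  },
  "fp16": { "enabled": true },
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 16
}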
Also, I am using the 2B-param model for now. Can you throw some light on this? Thanks!
Hello, thank you for sharing the code! I followed all the instructions, so I have the environment with all the packages installed and the training dataset in the right format.
When I launch the fine-tuning with:
bash scripts/finetune_lora_vision.sh --data_path my.json --image_folder myfolder --model_id '/anaconda3/envs/qwen2/lib/python3.10/site-packages/transformers/models/qwen2_vl/'
I get many errors related to the flash_attn package:
ImportError: /anaconda3/envs/qwen2/lib/python3.10/site-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZNK3c105Error4whatEv
Do you have any clue about what the problem could be? My version of flash_attn is 2.5.8, Python is 3.10.14, CUDA is 12.6.77, and I am working on Ubuntu 20.04.6.
@lucreziaT
That undefined-symbol error usually means flash_attn was built against a different torch version (an ABI mismatch). If so, you can downgrade torch to torch==2.3.0.
I'll try some other combinations with this again.
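Something along these lines should rebuild flash-attn against the matching torch (standard pip commands, not from my repo):

pip install torch==2.3.0
pip uninstall -y flash-attn
pip install flash-attn --no-build-isolation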
Hello, in the end, I had to downgrade CUDA to version 12.1.
I now have a new issue:
RuntimeError: shape mismatch: value tensor of shape [256, 3584] cannot be broadcast to indexing result of shape [0, 3584]
I see from here: https://huggingface.co./Qwen/Qwen2-VL-7B-Instruct/discussions/33 that I should add a call to processor.apply_chat_template, but I don't know where. Do you have any clue?
@lucreziaT Does your data look like this?
[
  {
    "id": "000000033471",
    "image": "000000033471.jpg",
    "conversations": [
      {
        "from": "human",
        "value": "<image>\nWhat are the colors of the bus in the image?"
      },
      {
        "from": "gpt",
        "value": "The bus in the image is white and red."
      }
    ]
  },
  ...
]
When you are using my code, you should have <image>\n in the text.
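The <image> tag is what gets expanded into the model's vision tokens during preprocessing; if it is missing, zero image positions are found in the input, which is exactly the [0, 3584] side of the shape-mismatch error above. A simplified sketch of the idea (hypothetical helper, not the exact collator in my repo; the special tokens are Qwen2-VL's):

# Each <image> placeholder is rewritten into Qwen2-VL's vision-token span
# before tokenization; <|image_pad|> is later repeated to match the number
# of visual patches for that image.
def insert_vision_tokens(text: str) -> str:
    return text.replace(
        "<image>",
        "<|vision_start|><|image_pad|><|vision_end|>",
    )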
Can you fine-tune with more than one image? I.e., could something like the below work?
[
  {
    "id": "000000033471",
    "image": ["000000033471.jpg", "image2.jpg", "image3.jpg"],
    "conversations": [
      {
        "from": "human",
        "value": "<image>\n<image>\n<image>\nWhat are the colors of the bus in the image?"
      },
      {
        "from": "gpt",
        "value": "The bus in the image is white and red."
      }
    ]
  },
  ...
]
Hi, I am working on creating a multimodal chatbot for a specific web application using a multimodal large language model (LLM). For text-based queries, I implement a retrieval mechanism. However, when the user query includes an image, I need to perform fine-tuning to handle such cases.
To achieve this, I scraped various pages of the web application and created QA pairs using a vision-based LLM. These QA pairs were used to fine-tune the Qwen2-VL model. I have experimented with multiple fine-tuning approaches, but none of them have worked effectively.
The issues I encountered include:
- The model loses its generalization capability and becomes overfitted to the custom data.
- It answers only questions similar to the training data, failing to handle broader or slightly varied queries.
I ensured the training data was as diverse as possible, yet the problem persists. Could you please help me figure out the issue? Are there any better alternatives or strategies I should consider? @2U1
@Rageshhf
According to this paper https://arxiv.org/pdf/2410.21228
While LoRA does freeze most pre-trained weights, research shows it can introduce "intruder dimensions" that reduce adaptability and degrade generalization, especially when tasks change over time. However, it can be a bit more stable if you increase the rank when performing LoRA.
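For example, raising the rank in a PEFT LoraConfig (a generic peft sketch; the values are illustrative, not a recommendation from my repo):

from peft import LoraConfig

# Illustrative values: a higher rank r behaves closer to full fine-tuning,
# which the paper above links to fewer "intruder dimensions".
lora_config = LoraConfig(
    r=128,               # try raising this from a small default like 8 or 16
    lora_alpha=256,      # commonly set to about 2x r
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)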
BTW, yes you can use the same dataset.
Okay. Will look into it.
I am experiencing the same problem as @Rageshhf. I am fine-tuning on a smaller, specific coastal-classification dataset (1055 images) that contains satellite images with a question-and-answer prompting style. After training for 1 epoch with the standard parameters, I am experiencing the following:
- The model sometimes forgets that it has the ability to analyse images (it thinks it is only a language model).
- It sometimes starts hallucinating like crazy and repeats itself a lot.
- In longer conversations, it sometimes just answers nothing.
- Performance on training data does improve a bit (loss is near zero, but accuracy on the training data is still not close to perfect).
- It fails to perform inference well when varying the prompt for the same training image.
I have already tried different training scripts, 8-bit fine-tuning, and LoRA training. What do you recommend changing, @2U1? Do you have good results fine-tuning it yourself on a specific dataset? I am currently trying lower learning rates (maybe they are way too high, so that the model overwrites previous memory -> catastrophic forgetting).
Also, if I use a single Q&A pair for each training sample, would my model lose the ability to do few-shot and longer conversations?
Looking forward to any help! :)
@ascension-hf
I fine-tuned the model on my own dataset (about 170k images, all in a specific domain). It did not lose its general ability, and it shows better performance than the 72B model on the domain I trained.
I made a few scenario types, such as multi-turn conversation, describing, and so on. This could be the reason my model wasn't changed much.
If you have limited data, I think lowering the learning rate and using a larger effective batch size (increase the accumulation steps) could lead to a better result.
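Concretely, something along these lines (an illustrative HF TrainingArguments sketch, not the exact values in my scripts; tune for your data):

from transformers import TrainingArguments

# Illustrative: a lower LR plus a larger effective batch size via gradient
# accumulation (effective batch = 1 x 16 x num_gpus here).
training_args = TrainingArguments(
    output_dir="output/qwen2-vl-lora",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,          # lower than common 1e-5 / 2e-5 defaults
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    bf16=True,
    logging_steps=10,
)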