KeyError: 'qwen2_vl' loading from Transformers
#42
by KevalRx
I am attempting to load the model with the transformers library and am getting an error. I followed the instructions from the official documentation: https://huggingface.co./Qwen/Qwen2-VL-7B-Instruct?library=transformers
I am on transformers 4.44.2.
# Load model directly
from transformers import AutoProcessor, AutoModelForSeq2SeqLM
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
model = AutoModelForSeq2SeqLM.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    992 try:
--> 993     config_class = CONFIG_MAPPING[config_dict["model_type"]]
    994 except KeyError:

KeyError: 'qwen2_vl'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    993     config_class = CONFIG_MAPPING[config_dict["model_type"]]
    994 except KeyError:
--> 995     raise ValueError(
    996         f"The checkpoint you are trying to load has model type `{config_dict['model_type']}` "
    997         "but Transformers does not recognize this architecture. This could be because of an "

ValueError: The checkpoint you are trying to load has model type `qwen2_vl` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
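For reference, a quick way to confirm which transformers version is actually installed (matching the 4.44.2 noted above):

import transformers

# The KeyError means this version's CONFIG_MAPPING has no 'qwen2_vl' entry
print(transformers.__version__)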
Hey @KevalRx,
Support for the `qwen2_vl` model type was added in Transformers v4.45.0, so v4.44.2 cannot resolve it. Try upgrading transformers to v4.45.2 and loading the model as shown in the Quickstart section of https://huggingface.co./Qwen/Qwen2-VL-7B-Instruct.
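As a minimal sketch following that Quickstart (using the explicit `Qwen2VLForConditionalGeneration` class; `device_map="auto"` assumes accelerate is installed):

# Upgrade first: pip install -U "transformers>=4.45.0"
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    torch_dtype="auto",   # use the dtype recorded in the checkpoint config
    device_map="auto",    # requires accelerate; places weights automatically
)

Note that `AutoModelForSeq2SeqLM` is not the auto class Qwen2-VL is registered under, so the snippet in your post may still fail even after the upgrade; the explicit class above sidesteps that.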