'LlamaForCausalLM' object has no attribute 'max_seq_length'

#8
by AronVic - opened

```
Traceback (most recent call last):
  File ".\unsloth\fine-tuning-use-local-model.py", line 87, in <module>
    model = FastLanguageModel.get_peft_model(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".\unsloth\unsloth\models\llama.py", line 1632, in get_peft_model
    assert max_seq_length <= model.max_seq_length
                             ^^^^^^^^^^^^^^^^^^^^
  File ".\torch\nn\modules\module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LlamaForCausalLM' object has no attribute 'max_seq_length'
```


I load "unsloth/llama-3-8b-bnb-4bit" from a local directory:

```python
local_dir = "../local_model"
tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path=local_dir,
    low_cpu_mem_usage=True,
    quantization_config=quantization_config,
)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=local_dir,
    low_cpu_mem_usage=True,
    quantization_config=quantization_config,
)
```

Then I set max_seq_length:

```python
model.config.max_seq_length = max_seq_length
```

....

How can I solve this problem? Thank you very much for your help.
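For context, this is my reading of the traceback, not an official answer: Unsloth's `get_peft_model` asserts against `model.max_seq_length`, an attribute of the model object itself, while the snippet above only sets `model.config.max_seq_length`. An attribute set on the config is not visible on the model, so the lookup fails. A minimal stand-in with plain Python classes reproduces this (no transformers needed; `FakeCausalLM` and `Config` are made-up names for illustration):

```python
class Config:
    """Stand-in for a Hugging Face model config object."""
    pass

class FakeCausalLM:
    """Made-up stand-in for LlamaForCausalLM."""
    def __init__(self):
        self.config = Config()

model = FakeCausalLM()

# What the snippet above does: set the value on the config only.
model.config.max_seq_length = 2048

# What Unsloth's assert reads: an attribute of the model itself.
try:
    _ = model.max_seq_length
except AttributeError as e:
    print(e)  # 'FakeCausalLM' object has no attribute 'max_seq_length'

# Setting it on the model object is what the assertion actually sees.
model.max_seq_length = 2048
assert 2048 <= model.max_seq_length
```

In practice, I believe the cleaner route with Unsloth is to load the checkpoint through `FastLanguageModel.from_pretrained(..., max_seq_length=...)` instead of `AutoModelForCausalLM`, since that loader records `max_seq_length` on the returned model; assigning `model.max_seq_length` by hand before `get_peft_model` is at best a workaround.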

Unsloth AI org


Hi there, sorry for the extremely late reply. Are you still having the problem?

I have the same problem. Do you have the solution now? If yes, please share it with me.

Unsloth AI org


According to someone in our Discord server, this could be the issue: "it was because of device mapping; the model was mapped to the CPU and the tokenizer was mapped to CUDA (GPU)".
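If device mapping really is the culprit, here is a hedged sketch of a more consistent setup (untested; the path and the `max_seq_length` value are placeholders taken from the post above). Note that `AutoTokenizer.from_pretrained` accepts neither `quantization_config` nor `low_cpu_mem_usage`: tokenizers are device-agnostic, and only the tensors they return get moved.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from unsloth import FastLanguageModel

local_dir = "../local_model"  # placeholder path from the post above

# Option A (sketch): let Unsloth load the checkpoint itself, so that
# max_seq_length is recorded on the returned model object.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=local_dir,
    max_seq_length=2048,  # placeholder value
    load_in_4bit=True,
)

# Option B (sketch): if loading through transformers directly, pin the
# whole model to one device and keep the tokenizer call free of
# device/quantization arguments.
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    device_map={"": 0},  # everything on GPU 0
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(local_dir)
inputs = tokenizer("hello", return_tensors="pt").to(model.device)
```

Option B still hits the original assertion unless the model is handed to Unsloth's loader or given `max_seq_length` some other way; it only addresses the device-mapping point quoted above.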
