Flash dependency (locks out non-NVIDIA GPUs)
Title says it all. This should run on a Mac M1 architecture (some have VRAM > 98 GB, so can run this). However, flash_attn is called repeatedly, and the code is hard to rework without it.
The code is equivalent to the standard Mistral 7B code, other than the MoE integration, which does not use attention. Flash attention should only be used when loading the model with use_flash_attn_2=True; otherwise it should be fine. Have you tried it?
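For example, something roughly like this should avoid requesting flash attention (a sketch, not tested on M1; I'm assuming a transformers version that accepts attn_implementation, the older flag name differs, and the remote modeling file may still import flash_attn at module level regardless):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "~/LLM/mixtral-8x7b-32kseqlen"  # same path as in the report below

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    trust_remote_code=True,
    # ask for the plain PyTorch attention path instead of flash attention
    attn_implementation="eager",  # assumption: supported by your transformers version
)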
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("~/LLM/mixtral-8x7b-32kseqlen", low_cpu_mem_usage=True, device_map="auto", trust_remote_code=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/miniconda3/envs/textgen/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 553, in from_pretrained
    model_class = get_class_from_dynamic_module(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/textgen/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 487, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
                   ^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/textgen/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 314, in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/miniconda3/envs/textgen/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 179, in check_imports
    raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run pip install flash_attn
Not sure why it keeps trying to get flash attention - this hasn't been a problem with other models.
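If it helps, the traceback points at transformers' check_imports step: with trust_remote_code=True it scans the downloaded modeling file for top-level imports and refuses to load it if any are missing, regardless of whether that code path would ever run. Here is a rough sketch of that kind of check (my simplification, not the actual transformers code; the helper name find_missing_imports is made up), just to show why the error fires even on hardware that never calls flash attention:

import importlib.util
import re

def find_missing_imports(modeling_file: str) -> list[str]:
    # Hypothetical helper, loosely mimicking the check_imports behavior:
    # collect top-level "import x" / "from x import y" names from the
    # remote modeling file and report any that aren't installed.
    with open(modeling_file, encoding="utf-8") as f:
        source = f.read()
    names = re.findall(r"^\s*import\s+([\w\.]+)", source, flags=re.MULTILINE)
    names += re.findall(r"^\s*from\s+([\w\.]+)\s+import", source, flags=re.MULTILINE)
    top_level = {n.split(".")[0] for n in names if not n.startswith(".")}
    # A package counts as missing if it can't be resolved in this environment,
    # even when the code that uses it would never execute at runtime.
    return [pkg for pkg in sorted(top_level) if importlib.util.find_spec(pkg) is None]

So an unconditional import flash_attn anywhere in the remote modeling file would be enough to trigger this ImportError on Apple Silicon, even if the attention path you'd actually use presumably doesn't need it.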
"Some have VRAM > 98 GB" - is that for the CPU or the GPU?