runtime error

Exit code: 1. Reason: -attn
Building wheel for flash-attn (setup.py): started
Building wheel for flash-attn (setup.py): finished with status 'done'
Created wheel for flash-attn: filename=flash_attn-2.6.3-py3-none-any.whl size=187309225 sha256=237ef9c6157db394e1ddde4ba609a21ebb98382377a27041edc09318801a6f24
Stored in directory: /home/user/.cache/pip/wheels/7e/e3/c3/89c7a2f3c4adc07cd1c675f8bb7b9ad4d18f64a72bccdfe826
Successfully built flash-attn
Installing collected packages: einops, flash-attn
Successfully installed einops-0.8.0 flash-attn-2.6.3
[notice] A new release of pip is available: 24.2 -> 24.3.1
[notice] To update, run: /usr/local/bin/python3.10 -m pip install --upgrade pip
Loading CLIP
Loading VLM's custom vision model
Loading tokenizer
Loading LLM: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
Downloading shards: 100%|██████████| 4/4 [00:38<00:00, 9.52s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.96it/s]
Loading VLM's custom text model
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Loading image adapter
pixtral_model: <class 'NoneType'>
pixtral_processor: <class 'NoneType'>
Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    from joycaption import stream_chat_mod, get_text_model, change_text_model, get_repo_gguf
  File "/home/user/app/joycaption.py", line 237, in <module>
    @spaces.GPU()
TypeError: spaces.GPU() missing 1 required positional argument: 'func'
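The `TypeError` at the end means the installed `spaces` package exposes `GPU` as a plain decorator (it expects the function directly), while `joycaption.py` calls it as a decorator factory with `@spaces.GPU()`. The sketch below reproduces that failure mode with a stand-in `GPU` decorator; the names are illustrative, not the real library internals. Depending on which `spaces` release the Space pins, the fix is usually either dropping the parentheses or upgrading `spaces` to a version whose `GPU` accepts both call styles.

```python
# Stand-in for an old-style spaces.GPU: a plain decorator that takes the
# function directly, NOT a factory returning a decorator.
def GPU(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

# `@GPU()` evaluates GPU() first, with no positional argument -- the same
# "missing 1 required positional argument: 'func'" seen in the traceback:
try:
    @GPU()
    def stream_chat_mod():
        pass
except TypeError as e:
    print(e)  # GPU() missing 1 required positional argument: 'func'

# With a plain decorator, the parentheses must be dropped:
@GPU
def stream_chat_mod():
    return "ok"

print(stream_chat_mod())  # → ok
```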
