Runtime error

Exit code: 1. Reason:

```
/3.10.15/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1666, in get_hf_file_metadata
    r = _request_wrapper(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 364, in _request_wrapper
    response = _request_wrapper(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 388, in _request_wrapper
    hf_raise_for_status(response)
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 454, in hf_raise_for_status
    raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-67115ff1-1550eee7136f4cf75e4286fb;70dd655d-c3a4-4e9e-89e4-55e756467f37)

Repository Not Found for url: https://huggingface.co./microsoft/Phi-3-mini-4k-instruct/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
User Access Token "spaces" is expired

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 29, in <module>
    model = AutoModelForCausalLM.from_pretrained(args.model, torch_dtype=torch.float16,
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 487, in from_pretrained
    resolved_config_file = cached_file(
  File "/home/user/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/utils/hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: microsoft/Phi-3-mini-4k-instruct is not a local folder and is not a valid model identifier listed on 'https://huggingface.co./models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
```
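The root cause is the line `User Access Token "spaces" is expired`: the Hub answers 401, so `from_pretrained` cannot resolve `config.json` and transformers surfaces this as a misleading "not a valid model identifier" error, even though `microsoft/Phi-3-mini-4k-instruct` is a public repo. The real fix is to regenerate the token; the error message's suggestion of passing `token=<your_token>` looks like this in the Space's app. A minimal sketch, assuming the fresh token is stored in an `HF_TOKEN` environment variable (the variable name is an assumption; the `from_pretrained` call and its `torch_dtype` argument come from the traceback above):

```python
import os

import torch
from transformers import AutoModelForCausalLM

# Assumption: a fresh, non-expired token is stored as a Space secret and
# exposed via the HF_TOKEN environment variable. The expired "spaces"
# token is what triggers the 401 in the traceback above.
hf_token = os.environ.get("HF_TOKEN")

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16,
    token=hf_token,  # forwarded to the Hub when downloading config/weights
)
```

Alternatively, running `huggingface-cli login` with the new token (as the error message suggests) stores it on disk so no `token=` argument is needed in the code.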
