Unable to download. I always get "The requested URL returned error: 403" in my terminal, and I don't know why.
I have more or less the same problem:
OSError: We couldn't connect to 'https://huggingface.co.' to load this file, couldn't find it in the cached files and it looks like meta-llama/Meta-Llama-3-8B-Instruct is not the path to a directory containing a file named config.json.
and my internet connection is good (curl -I https://huggingface.co. is fine).
same
Facing the same issue. Is there any solution?
I also ran into the same problem: even after getting the link again, I am still prompted with 403: Forbidden. How do I solve this?
+1 Same issue here...
It is easier to download the model from the official website, https://llama.meta.com/llama-downloads/; that gives you the original PyTorch checkpoint. There is a way to convert the original model to HF format: just use the script convert_llama_weights_to_hf.py from the newest transformers. It must be the newest version of transformers, because in older versions convert_llama_weights_to_hf.py only supports Llama 1 and Llama 2.
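The invocation looks roughly like this (a sketch: both paths are placeholders, and the exact flag names can vary between transformers versions, so check the script's --help first):

python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /path/to/downloaded/llama3-weights --model_size 8B --llama_version 3 --output_dir /path/to/hf-model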
Did anyone get their error resolved? @JiayinWang, @luckyrain67777
A less-than-ideal workaround is to use the path to the local snapshot directly, e.g. $HF_HOME/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/e5e23bbe8e749ef0efcf16cad411a7d23bd23298
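Spelled out in Python, that looks something like this (a sketch: the cache layout and the snapshot hash depend on your machine, and on many setups the snapshots live under an extra hub/ level inside HF_HOME):

import os
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the HF_HOME environment variable is set (default is ~/.cache/huggingface)
snapshot_path = os.path.join(
    os.environ["HF_HOME"],
    "models--meta-llama--Meta-Llama-3-8B-Instruct",
    "snapshots",
    "e5e23bbe8e749ef0efcf16cad411a7d23bd23298",  # use whichever hash is actually on your disk
)
tokenizer = AutoTokenizer.from_pretrained(snapshot_path)
model = AutoModelForCausalLM.from_pretrained(snapshot_path)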
Hi there! Please make sure to read https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct/discussions/130
If anyone is using a fine-grained access token, make sure to add the repo under the "Repositories permissions" section.
@litax thx bro
> Hi there! Please make sure to read https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct/discussions/130
I have been granted access to the model, but I still get the error: OSError: We couldn't connect to 'https://huggingface.co.' to load this file, couldn't find it in the cached files and it looks like meta-llama/Meta-Llama-3-8B-Instruct is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co./docs/transformers/installation#offline-mode'.
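For reference, the offline mode that error message points to only helps once the files are already in your local cache; with transformers you can force it per call. A minimal sketch:

from transformers import AutoModelForCausalLM

# local_files_only=True skips the network entirely and loads only from the local cache,
# so this only works if the model was fully downloaded at some earlier point
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    local_files_only=True,
)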
I have the same issue.
Also having the same issue
I am having the same issue
Any solution?
Using an HF token with "WRITE" permission instead of a "FINEGRAINED" one solved my issue. I got the solution from https://discuss.huggingface.co/t/loading-llama-3/90492. Hope this helps.
You will have to edit the token access in your settings to let it have "WRITE" permissions.
> You will have to edit the token access in your settings to let it have "WRITE" permissions.
This solved my problem, thanks
> If anyone is using a fine-grained access token, make sure to add the repo under the "Repositories permissions" section.
The perfect solution for this issue ❤️.
> If anyone is using a fine-grained access token, make sure to add the repo under the "Repositories permissions" section.
Yes. This is the solution. Thanks
> Using an HF token with "WRITE" permission instead of a "FINEGRAINED" one solved my issue. I got the solution from https://discuss.huggingface.co/t/loading-llama-3/90492. Hope this helps.
It works~~
This solved the issue:
- request access to the Llama repo on Hugging Face
- generate an access token and toggle "Read access to contents of all public gated repos you can access" in the access token permissions
- log in with the access token in your Python script:

from huggingface_hub import login

login(token="<your_access_token>")  # paste your actual token here
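Once the login succeeds, loading the gated repo should work. A quick sanity check (assuming your account has actually been granted access to the repo):

from transformers import AutoTokenizer

# This hits the Hub with your stored credentials; a 403 here means the token
# or the access grant is still the problem
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
print(tokenizer("hello"))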
> Using an HF token with "WRITE" permission instead of a "FINEGRAINED" one solved my issue. I got the solution from https://discuss.huggingface.co/t/loading-llama-3/90492. Hope this helps.
It works~~
This works