Need urgent help connecting to model, was working previously

#145
by amit15gupta - opened

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
access_token =
tokenizer = AutoTokenizer.from_pretrained(model_name, token =access_token )

#Trying the Llama-3-8B parameters
llm = VLLM(
model=model_name,
trust_remote_code=True, # mandatory for hf models
max_new_tokens=4096,
top_k=-1,
top_p=0.8,
temperature=0.5,
gpu_memory_utilization=0.8,
use_cache=False,
dtype ="bfloat16"
)
Hi all, my code above was working before but started throwing an error yesterday. I had cleaned my cache, so I regenerated and added a new token, but that didn't help. I am getting the error below:

Can someone help me urgently? I need to finish a project over this weekend.

OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct.
401 Client Error. (Request ID: Root=1-666d54c2-60c680d803c1849a337ee390;bd0d364b-7d75-4552-abe0-6a982028776c)

Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json.
Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted. You must be authenticated to access it.
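
As a quick sanity check, here is a minimal sketch (assuming the huggingface_hub library is installed and access_token holds the same token passed to AutoTokenizer above) that verifies whether the token can reach the gated config.json the error points to:

from huggingface_hub import hf_hub_download, whoami

access_token = ""  # same token used above; redacted here

# Should print your account details; a 401 here means the token itself is invalid.
print(whoami(token=access_token))

# Should download without error once access to the gated repo has been granted.
hf_hub_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    filename="config.json",
    token=access_token,
)

If whoami succeeds but the download still fails, the account most likely has not yet been granted access to the gated repo on the model page.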

amit15gupta changed discussion status to closed
amit15gupta changed discussion status to open

I am getting the same error. If you find a fix for this, please help me too.

Hello there, from now on you must use a Hugging Face login token to access these models.
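
For example, a minimal sketch of supplying the token (assuming the huggingface_hub package is installed and the account has already been granted access to the gated meta-llama repo on its model page):

# Option 1: log in once from the shell; the token is then stored locally.
#   huggingface-cli login

# Option 2: log in programmatically before loading the model.
from huggingface_hub import login

login(token="hf_...")  # placeholder; use your own Hugging Face access token

# Option 3: export the token as an environment variable so that libraries
# downloading from the Hub (transformers, vLLM, etc.) can pick it up:
#   export HF_TOKEN=hf_...

Whichever way the token is supplied, the account still has to request and be granted access on the model page linked in the error message.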
