Model not showing up on Voting panel after Submitting

#919
by alvations - opened

Hi HF devs, Open LLM leaderboard maintainers and other folks on the discussion board,

I've submitted quite a few models for evaluation, but somehow they don't show up on the Voting panel. I tried resubmitting them in the Submit panel, but it shows "already voted for the model".

Is there anything else I can do to make sure the models are considered for the leaderboard? Otherwise, I guess I'll just have to be patient and check back in a couple of days =)

Thank you in advance for the answers!

Regards,
Liling

Hi,

I am also facing the same issue. Can you let us know how to solve this problem?

Thanks

Open LLM Leaderboard org

Hi @alvations and @olabs-ai ,

Thank you for submitting your models and for your patience!

When you submit a model, your vote is automatically recorded. As noted in our FAQ, please do not resubmit your model. Instead, please open a discussion and include a link to your model's request file so we can check its status. You can find the request files in the Requests dataset.
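If it helps, here's a minimal sketch (not an official tool) of how you can fetch a request file with the huggingface_hub library; the org/filename below is only an illustration of the dataset's naming scheme, so substitute your own:

```python
import json

from huggingface_hub import hf_hub_download

# Request files live in the open-llm-leaderboard/requests dataset, one JSON
# file per submission, grouped into per-org folders. The filename here is
# just an example of the naming scheme; replace it with your own org/model.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/requests",
    filename="olabs-ai/reflection_model_eval_request_False_float16_Adapter.json",
    repo_type="dataset",
)

with open(path) as f:
    request = json.load(f)

print(request)
```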

As for the voting system, the more votes a model has, the faster it will be sent for evaluation. In the last few days our research cluster has been more or less empty, so all submitted models went from Pending to Running almost immediately, and you can only vote for models that are in Pending.

To help you now, could you both provide me with the request files for your submitted models? I'd be happy to check their status and see how the evaluation is going.

Hi Team,

Thanks for the prompt response. Details are as follows:

  1. Base Model - unsloth/Meta-Llama-3.1-8B
  2. Adapter - olabs-ai/reflection_model

Thanks :)

Open LLM Leaderboard org

@olabs-ai Please send me the request file from the Requests dataset.

Hi @alozowski

Thank you for the response and for helping to look into my submissions. These are the models I've tried to submit:

Rakuten/RakutenAI-7B
Rakuten/RakutenAI-7B-instruct
Rakuten/RakutenAI-7B-chat

Unbabel/TowerInstruct-13B-v0.1
Unbabel/TowerInstruct-Mistral-7B-v0.2
Unbabel/TowerInstruct-7B-v0.2
Unbabel/TowerBase-7B-v0.1
Unbabel/TowerBase-13B-v0.1
Unbabel/TowerInstruct-7B-v0.1

google/mt5-base
google/mt5-large
google/umt5-small
google/umt5-xl
google/umt5-xxl
google/umt5-base
google/mt5-small
google/mt5-xl
google/mt5-xxl
google/ul2

Open LLM Leaderboard org

@alvations, as I wrote above, please provide me with the request files for all the models you listed. You can find the request files in the Requests dataset.

Thanks @alozowski for the link to the requests page!

They're all showing "RUNNING" as their status. I guess there's no need to vote for them, which is why they aren't populated in the Vote panel. I'll wait patiently for them to finish then =)

They're in the Rakuten and Unbabel folders of the Requests dataset, and a whole lot are from https://huggingface.co./datasets/open-llm-leaderboard/requests/tree/main/google

Thank you again for the response and clarification!

Thanks.

I am able to see them in the list:

  1. reflection_model_eval_request_False_4bit_Adapter.json (516 Bytes), uploaded about 9 hours ago
  2. reflection_model_eval_request_False_float16_Adapter.json (519 Bytes), uploaded about 23 hours ago

Let's see how the results are :)

Hi @alozowski

How do I track whether the evaluation is done?

Open LLM Leaderboard org

Hi @olabs-ai ,

I can't track models from pasted git commit logs. Please find your models' request files in the Requests dataset. If you have any questions about their status, please feel free to open a new discussion with links to the request files and tag me, as this one is becoming difficult to keep up with.
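For future reference, here's a minimal sketch for checking the status of all of an org's request files programmatically; the "status" field name and its values are assumptions based on the Pending/Running statuses mentioned above, and "olabs-ai" is just the example org from this thread:

```python
import json

from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
# List every file in the Requests dataset, keeping only one org's folder.
files = api.list_repo_files("open-llm-leaderboard/requests", repo_type="dataset")
for name in files:
    if name.startswith("olabs-ai/") and name.endswith(".json"):
        path = hf_hub_download(
            repo_id="open-llm-leaderboard/requests",
            filename=name,
            repo_type="dataset",
        )
        with open(path) as f:
            request = json.load(f)
        # "status" is assumed to move through values like PENDING, RUNNING,
        # and FINISHED as the evaluation progresses.
        print(name, request.get("status"))
```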

alozowski changed discussion status to closed
