Regarding evaluation code version.

#58
by bedio - opened

Hello, which version of lm_eval is used for the leaderboard evaluations? My model outperforms the baseline locally, but on the leaderboard the baseline surpasses mine.

Open LLM Leaderboard org
edited Sep 17

Hi @bedio ,

Please don't create discussions in the Requests dataset unless you are renaming a model. We don't monitor discussions there, so we may miss your message. Instead, please open a discussion in the Community section of the Leaderboard:
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard/discussions

Regarding your question, we use our fork of lm_eval:
https://github.com/huggingface/lm-evaluation-harness/tree/adding_all_changess

You can find more info in the Reproducibility section of our documentation:
https://huggingface.co./docs/leaderboards/open_llm_leaderboard/about#reproducibility

alozowski changed discussion status to closed
