Regarding evaluation code version.
Hello, which version of lm_eval is used for the leaderboard evaluations? My model outperforms the baseline locally, but on the leaderboard the baseline surpasses mine.
Hi @bedio,
Please do not open discussions in the Requests dataset unless you are renaming a model; we do not monitor them here, so your question may be missed. Instead, please create a discussion in the Community section of the Leaderboard:
https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard/discussions
Regarding your question, we use our fork of lm_eval:
https://github.com/huggingface/lm-evaluation-harness/tree/adding_all_changess
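If you want your local numbers to be directly comparable, it is worth evaluating with that exact branch rather than an upstream release. As a rough sketch (the pip command and version check below are only illustrative; the fork's README is the authoritative reference):

```python
# Rough sketch, not an official command -- follow the fork's README for the exact setup.
# Install the harness from the branch linked above (run the pip command in a shell):
#   pip install "git+https://github.com/huggingface/lm-evaluation-harness.git@adding_all_changess"

from importlib.metadata import version

# Quick sanity check that the harness you import is the one you just installed,
# not an upstream lm_eval release already present in the environment.
print("lm_eval version:", version("lm_eval"))
```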
You can find more info in the Reproducibility section of our documentation:
https://huggingface.co./docs/leaderboards/open_llm_leaderboard/about#reproducibility
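For illustration, here is a minimal sketch of reproducing a single benchmark through the harness's Python API. The model type, model id, task name, and few-shot count below are assumptions for the example; please take the exact command and settings from the Reproducibility section above.

```python
# Minimal reproduction sketch. simple_evaluate exists in the upstream harness
# (lm_eval.evaluator.simple_evaluate); the fork may differ slightly, so treat the
# arguments below as placeholders and verify them against the Reproducibility docs.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",               # assumed model type for HF checkpoints
    model_args="pretrained=your-org/your-model",  # hypothetical model id -- replace with yours
    tasks=["arc_challenge"],                      # one leaderboard task, as an example
    num_fewshot=25,                               # assumed few-shot setting for ARC-Challenge
    batch_size=1,
)
print(results["results"])
```

Running the same branch with the same settings (revision, dtype, few-shot counts, batch size) usually explains discrepancies between local and leaderboard scores.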