Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation
Abstract
Cultural biases in multilingual datasets pose significant challenges to their effectiveness as global benchmarks. These biases stem not only from language but also from the cultural knowledge required to interpret questions, reducing the practical utility of translated datasets like MMLU. Furthermore, translation often introduces artifacts that can distort the meaning or clarity of questions in the target language. A common practice in multilingual evaluation is to rely on machine-translated evaluation sets, but simply translating a dataset is insufficient to address these challenges. In this work, we trace the impact of both issues on multilingual evaluations and the resulting model performance. Our large-scale evaluation of state-of-the-art open and proprietary models shows that progress on MMLU depends heavily on learning Western-centric concepts, with 28% of all questions requiring culturally sensitive knowledge. Moreover, among questions requiring geographic knowledge, an astounding 84.9% focus on North American or European regions. Model rankings change depending on whether models are evaluated on the full dataset or on the subset of questions annotated as culturally sensitive, showing how blindly relying on translated MMLU distorts model rankings. We release Global-MMLU, an improved MMLU with evaluation coverage across 42 languages. We improve its overall quality by engaging compensated professional and community annotators to verify translation quality while also rigorously evaluating the cultural biases present in the original dataset. This comprehensive Global-MMLU set also includes designated subsets labeled culturally sensitive and culturally agnostic to allow for more holistic and complete evaluation.
Community
I'd love to see this paper here!
The main result is an open dataset: https://huggingface.co./datasets/CohereForAI/Global-MMLU
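For anyone who wants to try it out, below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and splitting it into the culturally sensitive and culturally agnostic subsets described in the paper. The config name, split name, and the `cultural_sensitivity_label` column with its `CS`/`CA` values are assumptions on my part; check the dataset card for the exact schema.

```python
# Minimal sketch (not from the paper): load Global-MMLU and separate the
# culturally sensitive (CS) and culturally agnostic (CA) questions.
# The column name "cultural_sensitivity_label" and values "CS"/"CA" are
# assumed here; verify them against the dataset card before relying on them.
from datasets import load_dataset

# "en" is assumed to be one of the 42 language configs; "test" split assumed.
ds = load_dataset("CohereForAI/Global-MMLU", "en", split="test")

culturally_sensitive = ds.filter(lambda ex: ex["cultural_sensitivity_label"] == "CS")
culturally_agnostic = ds.filter(lambda ex: ex["cultural_sensitivity_label"] == "CA")

print(f"culturally sensitive: {len(culturally_sensitive)} questions")
print(f"culturally agnostic:  {len(culturally_agnostic)} questions")
```

Reporting accuracy separately on the two subsets, rather than only on the full set, is what surfaces the ranking shifts the abstract mentions.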
This is an automated message from the Librarian Bot. I found the following similar papers via the Semantic Scholar API:
- Towards Multilingual LLM Evaluation for European Languages (2024)
- Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages (2024)
- P-MMEval: A Parallel Multilingual Multitask Benchmark for Consistent Evaluation of LLMs (2024)
- MILU: A Multi-task Indic Language Understanding Benchmark (2024)
- Benchmarking Multimodal Models for Ukrainian Language Understanding Across Academic and Cultural Domains (2024)
- Uhura: A Benchmark for Evaluating Scientific Question Answering and Truthfulness in Low-Resource African Languages (2024)
- INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge (2024)