arXiv:2502.01534

Preference Leakage: A Contamination Problem in LLM-as-a-judge

Published on Feb 3
· Submitted by wjldw on Feb 4
Abstract

Large Language Models (LLMs) as judges and LLM-based data synthesis have emerged as two fundamental LLM-driven data annotation methods in model development. While their combination significantly enhances the efficiency of model training and evaluation, little attention has been given to the potential contamination introduced by this new model development paradigm. In this work, we expose preference leakage, a contamination problem in LLM-as-a-judge caused by the relatedness between the synthetic data generators and LLM-based evaluators. To study this issue, we first define three common types of relatedness between the data generator LLM and the judge LLM: being the same model, having an inheritance relationship, and belonging to the same model family. Through extensive experiments, we empirically confirm that judges are biased towards their related student models as a result of preference leakage, across multiple LLM baselines and benchmarks. Further analysis suggests that preference leakage is a pervasive issue that is harder to detect than previously identified biases in LLM-as-a-judge scenarios. These findings imply that preference leakage is a widespread and challenging problem in the area of LLM-as-a-judge. We release all code and data at: https://github.com/David-Li0406/Preference-Leakage.
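
As a rough illustration of the setup described in the abstract (not the authors' released code), the sketch below estimates how often a judge prefers responses from a student model trained on data from a related generator over responses from a student trained on an unrelated generator's data. The `generate` and `compare` interfaces are hypothetical placeholders.

```python
# Hypothetical sketch: measuring a judge's pairwise win rate for the student
# model related to it versus an unrelated student. A win rate well above 0.5
# is the kind of bias the paper attributes to preference leakage.
from collections import Counter

def related_student_win_rate(judge, prompts, related_student, unrelated_student):
    """Fraction of decided prompts on which `judge` prefers the related student."""
    tally = Counter()
    for prompt in prompts:
        resp_a = related_student.generate(prompt)    # student trained on the related generator's data
        resp_b = unrelated_student.generate(prompt)  # student trained on an unrelated generator's data
        # `compare` is assumed to return "A", "B", or "tie" for the given pair.
        tally[judge.compare(prompt, resp_a, resp_b)] += 1
    decided = tally["A"] + tally["B"]
    return tally["A"] / decided if decided else 0.0
```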

Community

Paper submitter

More resources on LLM-as-a-judge are on the website: https://llm-as-a-judge.github.io

We release code and data at: https://github.com/David-Li0406/Preference-Leakage

Interesting! I have messed around with using multiple judges from different "origins" and having them do a tally vote to arrive at a combined score for the quality of a generated response (a rough sketch of the idea is below). I think that did help some, but it obviously requires far more compute, so it wasn't really practical (at least not with the resources I have access to).
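
A minimal sketch of that tally-vote idea, assuming each judge exposes a pairwise `compare` method (these interfaces are hypothetical, not from the paper's code):

```python
# Hypothetical multi-judge tally vote: judges from different model families each
# cast a pairwise preference, and the majority becomes the combined verdict.
def tally_vote(judges, prompt, response_a, response_b):
    """Return 'A', 'B', or 'tie' by majority vote across heterogeneous judges."""
    votes = [j.compare(prompt, response_a, response_b) for j in judges]
    a_wins, b_wins = votes.count("A"), votes.count("B")
    if a_wins > b_wins:
        return "A"
    if b_wins > a_wins:
        return "B"
    return "tie"
```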


Exactly, multi-agent judges can be a potential solution to this issue, but I believe it is even more important to detect and prevent preference leakage before judgment (a simple relatedness check is sketched below).
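
A hedged sketch of what such a pre-judgment check could look like, flagging generator/judge pairs that fall into one of the three relatedness categories defined in the abstract. The metadata fields used here are assumptions, not part of the paper's code.

```python
from typing import Optional

# Hypothetical relatedness check run before selecting a judge for evaluation.
def relatedness(generator_meta: dict, judge_meta: dict) -> Optional[str]:
    """Return the relatedness type from the paper's taxonomy, or None if the pair looks safe."""
    if generator_meta["name"] == judge_meta["name"]:
        return "same model"
    if judge_meta.get("distilled_from") == generator_meta["name"] or \
       generator_meta.get("distilled_from") == judge_meta["name"]:
        return "inheritance"
    if generator_meta["family"] == judge_meta["family"]:
        return "same model family"
    return None
```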
