LLM as a Broken Telephone: Iterative Generation Distorts Information
Abstract
As large language models increasingly generate and process online content, concerns arise about the effects of repeatedly feeding their own outputs back as inputs. Inspired by the "broken telephone" effect in chained human communication, this study investigates whether LLMs similarly distort information through iterative generation. Through translation-based experiments, we find that distortion accumulates over successive generations and is influenced by language choice and chain complexity. While some degradation is inevitable, it can be mitigated through strategic prompting techniques. These findings contribute to discussions on the long-term effects of AI-mediated information propagation, raising important questions about the reliability of LLM-generated content in iterative workflows.
Code implementation available at: https://github.com/amr-mohamedd/LLM-as-a-Broken-Telephone
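The translation-based setup can be sketched as a simple feedback loop: a text is repeatedly transformed, each output becoming the next input, and distortion relative to the original is measured at every step. The sketch below is illustrative only; the `transform` callable stands in for the paper's LLM translation round-trips (an assumption, not the repository's actual pipeline), and a toy lossy function is used so the example runs without a model.

```python
from difflib import SequenceMatcher


def distortion(original: str, current: str) -> float:
    """Return 1 minus the similarity ratio between original and current text."""
    return 1.0 - SequenceMatcher(None, original, current).ratio()


def run_chain(text: str, steps: int, transform) -> list[str]:
    """Iteratively feed each output back as input, broken-telephone style.

    `transform` is a placeholder for an LLM translation round-trip
    (hypothetical; the actual experiments route text through
    intermediate languages via an LLM).
    """
    history = [text]
    for _ in range(steps):
        text = transform(text)
        history.append(text)
    return history


# Toy lossy transform: drop the final word each round, mimicking
# cumulative information loss (purely illustrative).
def lossy(s: str) -> str:
    return " ".join(s.split()[:-1])


source = "the quick brown fox jumps over the lazy dog"
chain = run_chain(source, steps=3, transform=lossy)
scores = [distortion(source, t) for t in chain]
# With a lossy transform, distortion is non-decreasing along the chain.
```

Swapping `lossy` for a real translation call (e.g., English to French to English through an LLM API) reproduces the chained setup the abstract describes; the distortion metric can likewise be replaced with a semantic similarity score.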