WritingBench: A Comprehensive Benchmark for Generative Writing
Abstract
Recent advancements in large language models (LLMs) have significantly enhanced text generation capabilities, yet evaluating their performance in generative writing remains a challenge. Existing benchmarks primarily focus on generic text generation or are limited to a narrow range of writing tasks, failing to capture the diverse requirements of high-quality written content across domains. To bridge this gap, we present WritingBench, a comprehensive benchmark designed to evaluate LLMs across 6 core writing domains and 100 subdomains, encompassing creative, persuasive, informative, and technical writing. We further propose a query-dependent evaluation framework that empowers LLMs to dynamically generate instance-specific assessment criteria. This framework is complemented by a fine-tuned critic model for criteria-aware scoring, enabling evaluations of style, format, and length. The framework's validity is further demonstrated by its data curation capability, which enables 7B-parameter models to approach state-of-the-art (SOTA) performance. We open-source the benchmark, along with evaluation tools and modular framework components, to advance the development of LLMs in writing.
Community
New Benchmark Release | WritingBench: A Comprehensive Framework for Evaluating Generative Writing
[Paper] • [Github Repo] • [Critic Model] • [Writing Model]
Key Innovations
WritingBench is a comprehensive benchmark for evaluating LLMs' writing capabilities across 1,239 real-world queries, spanning:
- 6 primary domains
- 100 fine-grained subdomains
- 3 core writing requirements: Style / Format / Length
- 1,546 avg. tokens per query
WritingBench integrates materials from diverse sources. Each query is paired with 5 instance-specific criteria, scored either by LLM evaluators or by a fine-tuned critic model; a minimal sketch of this scoring flow is shown below.
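To make the query-dependent evaluation concrete, here is a minimal sketch of the two-stage flow: derive instance-specific criteria for a query, then score a response against each criterion and average. The prompt wording, the generic `llm(prompt) -> str` callable, and the JSON reply format are illustrative assumptions, not the benchmark's actual prompts or scripts; the official evaluation code lives in the GitHub repo linked below.

```python
import json
from statistics import mean

# Illustrative prompts only -- WritingBench ships its own prompt templates.
CRITERIA_PROMPT = (
    "You are an expert writing evaluator. For the writing query below, "
    "propose 5 concise, instance-specific evaluation criteria, one per line.\n\n"
    "Query:\n{query}"
)

SCORING_PROMPT = (
    "Rate the response against the criterion on a 1-10 scale. "
    'Reply only with JSON such as {{"score": 7}}.\n\n'
    "Criterion: {criterion}\n\nQuery:\n{query}\n\nResponse:\n{response}"
)


def generate_criteria(llm, query: str, n: int = 5) -> list[str]:
    """Ask an LLM evaluator to produce instance-specific criteria for one query."""
    reply = llm(CRITERIA_PROMPT.format(query=query))
    criteria = [line.strip("-• ").strip() for line in reply.splitlines() if line.strip()]
    return criteria[:n]


def score_response(llm, query: str, response: str, criteria: list[str]) -> float:
    """Score the response on each criterion, then average the per-criterion scores."""
    scores = []
    for criterion in criteria:
        reply = llm(SCORING_PROMPT.format(criterion=criterion, query=query, response=response))
        scores.append(json.loads(reply)["score"])
    return mean(scores)


# Usage with any callable `llm(prompt) -> str` (e.g. an API client wrapper or the
# released critic model served behind a text-generation endpoint):
#   criteria = generate_criteria(llm, query)
#   score = score_response(llm, query, model_output, criteria)
```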
Try It Now: https://github.com/X-PLUG/WritingBench
The following similar papers were recommended by the Semantic Scholar API (automated message from the Librarian Bot):
- WildLong: Synthesizing Realistic Long-Context Instruction Data at Scale (2025)
- Learning to Align Multi-Faceted Evaluation: A Unified and Robust Framework (2025)
- LongEval: A Comprehensive Analysis of Long-Text Generation Through a Plan-based Paradigm (2025)
- DeepThink: Aligning Language Models with Domain-Specific User Intents (2025)
- Shifting Long-Context LLMs Research from Input to Output (2025)
- MMSciBench: Benchmarking Language Models on Multimodal Scientific Problems (2025)
- A Cognitive Writing Perspective for Constrained Long-Form Text Generation (2025)