Typed-RAG: Type-aware Multi-Aspect Decomposition for Non-Factoid Question Answering
Abstract
Non-factoid question-answering (NFQA) poses a significant challenge due to its open-ended nature, diverse intents, and the need for multi-aspect reasoning, which renders conventional factoid QA approaches, including retrieval-augmented generation (RAG), inadequate. Unlike factoid questions, non-factoid questions (NFQs) lack definitive answers and require synthesizing information from multiple sources across various reasoning dimensions. To address these limitations, we introduce Typed-RAG, a type-aware multi-aspect decomposition framework within the RAG paradigm for NFQA. Typed-RAG classifies NFQs into distinct types -- such as debate, experience, and comparison -- and applies aspect-based decomposition to refine retrieval and generation strategies. By decomposing multi-aspect NFQs into single-aspect sub-queries and aggregating the results, Typed-RAG generates more informative and contextually relevant responses. To evaluate Typed-RAG, we introduce Wiki-NFQA, a benchmark dataset covering diverse NFQ types. Experimental results demonstrate that Typed-RAG outperforms baselines, thereby highlighting the importance of type-aware decomposition for effective retrieval and generation in NFQA. Our code and dataset are available at https://github.com/TeamNLP/Typed-RAG.
Community
We introduce Typed-RAG, a novel framework enhancing Retrieval-Augmented Generation (RAG) for Non-Factoid Question Answering (NFQA). Our key contributions include:
- Typed-RAG: A type-aware method integrating question classification and multi-aspect decomposition to improve RAG's handling of diverse, complex NFQs. This tailors retrieval and generation strategies for different question types.
- Wiki-NFQA Dataset & Validation: We introduce a new benchmark dataset (Wiki-NFQA) for NFQA and demonstrate that Typed-RAG significantly outperforms baseline models in generating high-quality answers.
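The classify-decompose-aggregate pipeline described above can be sketched in a few lines. This is a hypothetical simplification: the keyword-based classifier, the decomposition templates, and the `retrieve`/`generate` callables below are stand-ins for the LLM-based components the paper actually uses, not the authors' implementation.

```python
# Minimal sketch of a Typed-RAG-style pipeline. All heuristics here are
# illustrative placeholders for the paper's LLM-based classification,
# retrieval, and generation stages.

def classify_nfq(question: str) -> str:
    """Toy type classifier: keyword heuristics stand in for the
    type-aware classifier described in the paper."""
    q = question.lower()
    if " vs " in q or "compare" in q or "better" in q:
        return "comparison"
    if "should" in q or "agree" in q:
        return "debate"
    if "how to" in q or "how do i" in q:
        return "instruction"
    if "why" in q:
        return "reason"
    return "experience"

def decompose(question: str, qtype: str) -> list[str]:
    """Aspect-based decomposition: split a multi-aspect NFQ into
    single-aspect sub-queries (template-based, for illustration)."""
    if qtype == "comparison":
        return [f"Strengths of each option in: {question}",
                f"Weaknesses of each option in: {question}"]
    if qtype == "debate":
        return [f"Arguments for: {question}",
                f"Arguments against: {question}"]
    return [question]  # single-aspect types pass through unchanged

def answer_nfq(question: str, retrieve, generate) -> str:
    """Run the type-aware pipeline: classify, decompose, retrieve per
    sub-query, generate a sub-answer for each, then aggregate."""
    qtype = classify_nfq(question)
    sub_answers = []
    for sub_q in decompose(question, qtype):
        docs = retrieve(sub_q)          # e.g. a dense or BM25 retriever
        sub_answers.append(generate(sub_q, docs))  # e.g. an LLM call
    return " ".join(sub_answers)        # simple concatenation aggregator
```

Because `retrieve` and `generate` are injected as callables, the same skeleton works with any retriever or generator backend; only the decomposition templates would need per-type tuning.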
Related papers recommended by the Semantic Scholar API:
- Vendi-RAG: Adaptively Trading-Off Diversity And Quality Significantly Improves Retrieval Augmented Generation With LLMs (2025)
- MEBench: Benchmarking Large Language Models for Cross-Document Multi-Entity Question Answering (2025)
- Optimizing open-domain question answering with graph-based retrieval augmented generation (2025)
- SRAG: Structured Retrieval-Augmented Generation for Multi-Entity Question Answering over Wikipedia Graph (2025)
- RGAR: Recurrence Generation-augmented Retrieval for Factual-aware Medical Question Answering (2025)
- DeFine: A Decomposed and Fine-Grained Annotated Dataset for Long-form Article Generation (2025)
- Federated Retrieval Augmented Generation for Multi-Product Question Answering (2025)