Papers
arxiv:2503.15879

Typed-RAG: Type-aware Multi-Aspect Decomposition for Non-Factoid Question Answering

Published on Mar 20
· Submitted by oneonlee on Mar 25

Abstract

Non-factoid question-answering (NFQA) poses a significant challenge due to its open-ended nature, diverse intents, and the need for multi-aspect reasoning, which renders conventional factoid QA approaches, including retrieval-augmented generation (RAG), inadequate. Unlike factoid questions, non-factoid questions (NFQs) lack definitive answers and require synthesizing information from multiple sources across various reasoning dimensions. To address these limitations, we introduce Typed-RAG, a type-aware multi-aspect decomposition framework within the RAG paradigm for NFQA. Typed-RAG classifies NFQs into distinct types -- such as debate, experience, and comparison -- and applies aspect-based decomposition to refine retrieval and generation strategies. By decomposing multi-aspect NFQs into single-aspect sub-queries and aggregating the results, Typed-RAG generates more informative and contextually relevant responses. To evaluate Typed-RAG, we introduce Wiki-NFQA, a benchmark dataset covering diverse NFQ types. Experimental results demonstrate that Typed-RAG outperforms baselines, thereby highlighting the importance of type-aware decomposition for effective retrieval and generation in NFQA. Our code and dataset are available at https://github.com/TeamNLP/Typed-RAG.

Community


We introduce Typed-RAG, a novel framework enhancing Retrieval-Augmented Generation (RAG) for Non-Factoid Question Answering (NFQA). Our key contributions include:

  • Typed-RAG: A type-aware method integrating question classification and multi-aspect decomposition to improve RAG's handling of diverse, complex NFQs. This tailors retrieval and generation strategies for different question types.
  • Wiki-NFQA Dataset & Validation: We introduce a new benchmark dataset (Wiki-NFQA) for NFQA and demonstrate that Typed-RAG significantly outperforms baseline models in generating high-quality answers.
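The pipeline summarized above (classify the question type, decompose multi-aspect NFQs into single-aspect sub-queries, run retrieval and generation per sub-query, then aggregate) can be sketched as follows. This is an illustrative sketch only: all function names, type labels beyond those named in the abstract, and the keyword-based classifier and template-based decomposer are assumptions standing in for the paper's LLM-based components, not the authors' implementation.

```python
def classify_nfq(question: str) -> str:
    # Toy keyword heuristic standing in for the paper's question-type classifier.
    q = question.lower()
    if " vs " in q or "better" in q or "compare" in q:
        return "comparison"
    if q.startswith("should ") or "pros and cons" in q:
        return "debate"
    if q.startswith(("how do", "how to", "how can")):
        return "instruction"
    if q.startswith("why "):
        return "reason"
    return "experience"

def decompose(question: str, qtype: str) -> list[str]:
    # Split a multi-aspect NFQ into single-aspect sub-queries.
    # Templates here are illustrative; the paper uses type-specific strategies.
    if qtype == "comparison":
        return [f"Strengths of each option in: {question}",
                f"Weaknesses of each option in: {question}"]
    if qtype == "debate":
        return [f"Arguments for: {question}",
                f"Arguments against: {question}"]
    return [question]  # single-aspect types pass through unchanged

def answer_sub_query(sub_query: str, retriever, generator) -> str:
    # Standard RAG step: retrieve passages, then generate a focused answer.
    passages = retriever(sub_query)
    return generator(sub_query, passages)

def typed_rag(question: str, retriever, generator, aggregator) -> str:
    qtype = classify_nfq(question)
    partial_answers = [answer_sub_query(sq, retriever, generator)
                       for sq in decompose(question, qtype)]
    # Aggregate single-aspect answers into one response for the original NFQ.
    return aggregator(question, partial_answers)
```

With stub retriever/generator/aggregator callables (e.g. an aggregator that joins the per-aspect answers), a comparison question like "Is tea better than coffee?" is routed through two sub-queries, one per aspect, before aggregation.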

Datasets citing this paper 1

Collections including this paper 1