Retrieval Models Aren't Tool-Savvy: Benchmarking Tool Retrieval for Large Language Models
Abstract
Tool learning aims to augment large language models (LLMs) with diverse tools, enabling them to act as agents for solving practical tasks. Due to the limited context length of tool-using LLMs, adopting information retrieval (IR) models to select useful tools from large toolsets is a critical initial step. However, the performance of IR models on tool retrieval tasks remains underexplored and unclear. Most tool-use benchmarks simplify this step by manually pre-annotating a small set of relevant tools for each task, which is far from real-world scenarios. In this paper, we propose ToolRet, a heterogeneous tool retrieval benchmark comprising 7.6k diverse retrieval tasks and a corpus of 43k tools, collected from existing datasets. We benchmark six types of models on ToolRet. Surprisingly, even models with strong performance on conventional IR benchmarks exhibit poor performance on ToolRet. This low retrieval quality degrades the task pass rate of tool-use LLMs. As a further step, we contribute a large-scale training dataset with over 200k instances, which substantially improves the tool retrieval ability of IR models.
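The abstract describes evaluating IR models over a large tool corpus. Below is a minimal sketch of how such an evaluation could be run with an off-the-shelf embedding model; the tool corpus, queries, relevance labels, and the nDCG helper are illustrative placeholders and assumptions, not the actual ToolRet data or evaluation code.

```python
# Minimal sketch of dense tool retrieval evaluation.
# Hypothetical toy data; the real ToolRet benchmark provides 7.6k tasks,
# a 43k-tool corpus, and relevance labels.
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy stand-ins for a tool corpus and retrieval tasks (placeholders, not ToolRet data).
tools = {
    "get_weather": "Returns the current weather for a given city.",
    "search_flights": "Searches available flights between two airports on a date.",
    "convert_currency": "Converts an amount from one currency to another.",
}
queries = {"q1": "I need to know if it will rain in Paris tomorrow."}
qrels = {"q1": {"get_weather"}}  # relevant tools per query

model = SentenceTransformer("all-MiniLM-L6-v2")  # any off-the-shelf embedding model

tool_ids = list(tools)
tool_emb = model.encode([tools[t] for t in tool_ids], normalize_embeddings=True)
query_emb = model.encode(list(queries.values()), normalize_embeddings=True)

def ndcg_at_k(ranked_ids, relevant, k=10):
    """Binary-relevance nDCG@k."""
    gains = [1.0 if t in relevant else 0.0 for t in ranked_ids[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

scores = query_emb @ tool_emb.T  # cosine similarity (embeddings are normalized)
for qi, qid in enumerate(queries):
    ranked = [tool_ids[j] for j in np.argsort(-scores[qi])]
    print(qid, "nDCG@10 =", round(ndcg_at_k(ranked, qrels[qid]), 3))
```

This only illustrates the dense-retrieval setting; the paper also benchmarks other model families (e.g., sparse and re-ranking approaches), which would replace the embedding-and-dot-product step above.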
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MMTEB: Massive Multilingual Text Embedding Benchmark (2025)
- One ruler to measure them all: Benchmarking multilingual long-context language models (2025)
- KEIR @ ECIR 2025: The Second Workshop on Knowledge-Enhanced Information Retrieval (2025)
- Training Large Language Models to be Better Rule Followers (2025)
- Tool Unlearning for Tool-Augmented LLMs (2025)
- Making Language Models Robust Against Negation (2025)
- Boosting Tool Use of Large Language Models via Iterative Reinforced Fine-Tuning (2025)