ProBench: Judging Multimodal Foundation Models on Open-ended Multi-domain Expert Tasks
Abstract
Solving expert-level multimodal tasks is a key milestone towards general intelligence. As the capabilities of multimodal large language models (MLLMs) continue to improve, evaluation of such advanced multimodal intelligence becomes necessary yet challenging. In this work, we introduce ProBench, a benchmark of open-ended user queries that require professional expertise and advanced reasoning. ProBench consists of 4,000 high-quality samples independently submitted by professionals based on their daily productivity demands. It spans 10 fields and 56 sub-fields, including science, arts, humanities, coding, mathematics, and creative writing. Experimentally, we evaluate and compare 24 of the latest models using MLLM-as-a-Judge. Our results reveal that although the best open-source models rival the proprietary ones, ProBench presents significant challenges in visual perception, textual understanding, domain knowledge, and advanced reasoning, thus providing valuable directions for future multimodal AI research efforts.
Community
Project page: https://yan98.github.io/ProBench/
GitHub: https://github.com/Yan98/ProBench_eval
Dataset: https://huggingface.co./datasets/HelloKKMe/ProBench
arXiv: https://arxiv.org/abs/2503.06885
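For reference, here is a minimal sketch of how the benchmark data might be pulled from the Hub with the Hugging Face `datasets` library. The repo id comes from the dataset link above; the available splits and the fields of each record are not documented here, so the sketch only loads and inspects whatever the repository provides.

```python
# Minimal sketch: loading ProBench from the Hugging Face Hub.
# The repo id is taken from the dataset link above; split names and
# per-sample fields are assumptions -- inspect the loaded data to confirm.
from datasets import load_dataset

ds = load_dataset("HelloKKMe/ProBench")   # downloads all available splits
print(ds)                                 # show split names and sizes

first_split = next(iter(ds.values()))     # pick whichever split exists
print(first_split[0])                     # inspect one sample's fields
```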
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models (2025)
- Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning (2025)
- MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models (2025)
- FaceXBench: Evaluating Multimodal LLMs on Face Understanding (2025)
- Imagine while Reasoning in Space: Multimodal Visualization-of-Thought (2025)
- EmoBench-M: Benchmarking Emotional Intelligence for Multimodal Large Language Models (2025)
- EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents (2025)