# JDERW: A Benchmark for Evaluating World Models in Large Language Models (LLMs)

## Overview
JDERW (Japanese Dataset for Evaluating Reasoning with World Models) is a benchmark dataset designed to assess the ability of Large Language Models (LLMs) to understand and reason about real-world phenomena and common sense. It includes 103 questions categorized into six reasoning types:
- Causal Reasoning (e.g., Why does it snow in winter?)
- Temporal Reasoning (e.g., What happens when you leave a hot coffee out?)
- Spatial Reasoning (e.g., What happens to a ball placed on a slope?)
- Abstract Concept Reasoning (e.g., What is happiness?)
- Common Sense Reasoning (e.g., How should you cross the road?)
- Planning Reasoning (e.g., How do you make curry?)
This dataset enables a detailed evaluation of LLMs’ strengths and weaknesses in world model comprehension, paving the way for improvements in model development.
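To get a quick feel for the data, you can load it from the Hugging Face Hub and count how the 103 questions are distributed across the six genres. A minimal sketch (the split name `train` is an assumption; adjust it to whatever split the dataset actually exposes):

```python
from collections import Counter

from datasets import load_dataset

# Load JDERW from the Hub; the split name "train" is an assumption.
ds = load_dataset("DeL-TaiseiOzaki/JDERW", split="train")

# Tally the number of questions per reasoning genre.
print(Counter(ds["genre"]))
```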
## Dataset Structure
Each sample in JDERW consists of:
- `situation`: Context or scenario setting
- `question`: The question to be answered
- `answer`: A reference correct answer
- `reasoning`: Explanation for the answer
- `eval aspect`: Evaluation criteria
- `genre`: The type of reasoning involved
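For illustration, a single record might look like the following sketch. Every value below is invented for this example rather than taken from the dataset:

```python
sample = {
    "situation": "A cup of hot coffee is left on a desk at room temperature.",
    "question": "What happens to the coffee after an hour?",
    "answer": "It cools down toward room temperature.",
    "reasoning": "Heat flows from the hot coffee into the cooler surrounding air.",
    "eval aspect": "Does the response mention cooling and explain why it occurs?",
    "genre": "temporal reasoning",
}
```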
## Usage

To run inference over JDERW, you can use the script below with a Hugging Face model. It prompts the model with each situation and question, then saves the responses to a CSV file.

### Installation

Ensure you have the required dependencies installed:

```bash
pip install torch datasets transformers
```

### Running Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer


def main(model_name):
    # Load JDERW; the split name "train" is assumed, adjust if needed.
    ds = load_dataset("DeL-TaiseiOzaki/JDERW", split="train")
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
    ).eval()

    def pred(example):
        # Build the prompt from the scenario and its question.
        prompt = f"{example['situation']}\n{example['question']}"
        inputs = tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            add_generation_prompt=True,
            return_tensors="pt",
        ).to(model.device)
        with torch.no_grad():
            output = model.generate(inputs, max_new_tokens=512)
        # Keep only the newly generated tokens; store them under the model name.
        example[model_name] = tokenizer.decode(
            output[0][inputs.shape[-1]:], skip_special_tokens=True
        )
        return example

    ds = ds.map(pred, batched=False)
    ds.to_csv(f"{model_name.replace('/', '-')}.csv", index=False)


if __name__ == "__main__":
    main("<HuggingFace Model ID>")
```
Replace `<HuggingFace Model ID>` with the ID of the model you wish to use.
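For example, an invocation with a concrete model ID might look like this (the model below is purely illustrative, not a recommendation):

```python
# Writes predictions to "Qwen-Qwen2.5-7B-Instruct.csv", with the model's
# responses stored in a column named after the model ID.
main("Qwen/Qwen2.5-7B-Instruct")
```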
## Benchmarking Results
JDERW has been used to evaluate various LLMs, and the results show distinct strengths and weaknesses across different reasoning categories. Some key findings include:
- Llama-3.3-70B-Instruct excels in temporal and abstract reasoning.
- GPT-4o and Claude-3-5-Sonnet perform well in planning and common sense reasoning.
- Most models struggle with abstract concept reasoning.
| Model | Causal | Spatial | Temporal | Planning | Common Sense | Abstract Concept |
|---|---|---|---|---|---|---|
| Llama-3.3-70B-Instruct | 4.032 | 3.914 | 4.214 | 3.867 | 4.057 | 3.667 |
| GPT-4o | 3.903 | 4.114 | 4.071 | 4.200 | 3.857 | 2.667 |
| Claude-3-5-Sonnet | 4.000 | 3.743 | 3.857 | 4.000 | 4.000 | 3.333 |
These findings highlight the importance of evaluating LLMs beyond simple accuracy metrics to understand how well they internalize world models.
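This card does not spell out the scoring procedure, but the table above suggests each response is graded on a roughly five-point scale and averaged per category. As a hedged sketch, per-genre means could be computed from a CSV of graded responses like this (the file name and the `score` column are assumptions for illustration):

```python
import pandas as pd

# Hypothetical CSV with one row per question: the model's response plus a
# numeric grade; the file name and "score" column are assumed for illustration.
df = pd.read_csv("judged_responses.csv")

# Mean score per reasoning genre, mirroring the table above.
print(df.groupby("genre")["score"].mean().round(3))
```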
## Future Directions
- Expanding the dataset: Increasing the number of questions to cover more diverse real-world scenarios.
- Human comparison: Comparing LLM performance with human responses to better assess gaps in world modeling.
- Exploring new categories: Investigating additional reasoning dimensions beyond the six currently defined.
- Improving evaluation metrics: Refining assessment criteria to provide deeper insights into LLM capabilities.
## Citation
If you use JDERW in your research, please cite the following paper:
```bibtex
@article{JDERW2024,
  author  = {Taisei Ozaki and Takumi Matsushita and Tsuyoshi Miura},
  title   = {JDERW: A Benchmark for Evaluating World Models in Large Language Models},
  journal = {arXiv preprint arXiv:XXXX.XXXX},
  year    = {2024}
}
```
## Acknowledgments
This research is supported by Osaka Metropolitan University, Institute of Science Tokyo, and The University of Tokyo.