Dataset Card for Natural Questions
Dataset Summary
The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions read an entire page to find the answer, make NQ a more realistic and challenging task than prior QA datasets.
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
default
- Size of downloaded dataset files: 42981 MB
- Size of the generated dataset: 139706 MB
- Total amount of disk used: 182687 MB
A full 'train' example is too large to reproduce here; the sketch below shows how to fetch and inspect one.
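A minimal sketch for pulling a single training example, assuming the dataset's hub id is `natural_questions` and that streaming is available for it; streaming avoids materializing the ~140 GB corpus on disk:

```python
from datasets import load_dataset

# Stream the train split instead of downloading it in full.
nq = load_dataset("natural_questions", split="train", streaming=True)

example = next(iter(nq))
print(example["question"]["text"])                   # the real user query
print(example["document"]["title"])                  # Wikipedia page title
print(len(example["document"]["tokens"]["token"]))   # document length in tokens
```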
Data Fields
The data fields are the same among all splits.
default
"id": datasets.Value("string"),
"document": {
"title": datasets.Value("string"),
"url": datasets.Value("string"),
"html": datasets.Value("string"),
"tokens": datasets.features.Sequence(
{
"token": datasets.Value("string"),
"is_html": datasets.Value("bool"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
}
),
},
"question": {
"text": datasets.Value("string"),
"tokens": datasets.features.Sequence(datasets.Value("string")),
},
"long_answer_candidates": datasets.features.Sequence(
{
"start_token": datasets.Value("int64"),
"end_token": datasets.Value("int64"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
"top_level": datasets.Value("bool"),
}
),
"annotations": datasets.features.Sequence(
{
"id": datasets.Value("string"),
"long_answer": {
"start_token": datasets.Value("int64"),
"end_token": datasets.Value("int64"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
"candidate_index": datasets.Value("int64")
},
"short_answers": datasets.features.Sequence(
{
"start_token": datasets.Value("int64"),
"end_token": datasets.Value("int64"),
"start_byte": datasets.Value("int64"),
"end_byte": datasets.Value("int64"),
"text": datasets.Value("string"),
}
),
"yes_no_answer": datasets.features.ClassLabel(
names=["NO", "YES"]
), # Can also be -1 for NONE.
}
)
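To make the token-offset fields concrete, here is a hedged sketch that decodes one long-answer candidate back into plain text. It assumes `example` is a record fetched as in the Data Instances snippet, and relies on 🤗 Datasets exposing a `Sequence` over a dict as a dict of lists; the helper name `candidate_text` is ours, not part of the dataset.

```python
def candidate_text(example, i=0):
    """Render the i-th long-answer candidate as plain text."""
    # Each candidate field is a parallel list, so index per field.
    cands = example["long_answer_candidates"]
    start, end = cands["start_token"][i], cands["end_token"][i]
    toks = example["document"]["tokens"]
    return " ".join(
        tok
        for tok, is_html in zip(toks["token"][start:end], toks["is_html"][start:end])
        if not is_html  # keep visible text, drop HTML markup tokens
    )
```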
Data Splits
| name    | train  | validation |
|---------|--------|------------|
| default | 307373 | 7830       |
| dev     | N/A    | 7830       |
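A configuration is selected by name as the second argument to `load_dataset`; a minimal sketch, assuming the config names shown in the table above:

```python
from datasets import load_dataset

# "default" carries both splits; "dev" carries only the
# 7830-example validation split.
nq_dev = load_dataset("natural_questions", "dev", split="validation")
print(nq_dev.num_rows)  # expected: 7830
```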
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Creative Commons Attribution-ShareAlike 3.0 Unported.
Citation Information
@article{47761,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {Transactions of the Association for Computational Linguistics}
}
Contributions