id | private | tags | description | downloads | likes |
---|---|---|---|---|---|
jeopardy | false | [
"language:en"
] | Dataset containing 216,930 Jeopardy questions, answers and other data.
The JSON file is an unordered list of questions where each question has:
'category' : the question category, e.g. "HISTORY"
'value' : integer $ value of the question as string, e.g. "200"
Note: This is "None" for Final Jeopardy! and Tiebreaker questions
'question' : text of question
Note: This sometimes contains hyperlinks and other messy text, such as when there's a picture or video question
'answer' : text of answer
'round' : one of "Jeopardy!","Double Jeopardy!","Final Jeopardy!" or "Tiebreaker"
Note: Tiebreaker questions do happen but they're very rare (like once every 20 years)
'show_number' : int of show number, e.g. '4680'
'air_date' : string of the show air date in format YYYY-MM-DD | 301 | 2 |
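A minimal sketch of reading the raw JSON with the fields listed above (the filename is an assumption):
```python
import json

# Hypothetical filename; the field names follow the schema described above.
with open("jeopardy.json") as f:
    questions = json.load(f)

# 'value' is None for Final Jeopardy! and Tiebreaker questions, per the note above.
finals = [q for q in questions if q["round"] == "Final Jeopardy!"]
print(len(questions), "questions,", len(finals), "from Final Jeopardy! rounds")
```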
jfleg | false | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:other-language-learner",
"size_categories:1K<n<10K",
"source_datasets:extended|other-GUG-grammaticality-judgements",
"language:en",
"license:cc-by-nc-sa-4.0",
"grammatical-error-correction"
] | JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus.
It is a gold standard benchmark for developing and evaluating GEC systems with respect to
fluency (extent to which a text is native-sounding) as well as grammaticality.
For each source document, there are four human-written corrections (ref0 to ref3). | 1,667 | 18 |
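A short sketch of inspecting the four references per source sentence, assuming the loader exposes 'sentence' and 'corrections' columns:
```python
from datasets import load_dataset

# Assumed column names: 'sentence' (source) and 'corrections' (ref0-ref3).
jfleg = load_dataset("jfleg", split="validation")
example = jfleg[0]
print(example["sentence"])
for ref in example["corrections"]:  # four human-written corrections
    print(" ->", ref)
```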
jigsaw_toxicity_pred | false | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc0-1.0"
] | This dataset consists of a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. | 1,343 | 9 |
jigsaw_unintended_bias | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"toxicity-prediction"
] | A collection of comments from the defunct Civil Comments platform that have been annotated for their toxicity. | 575 | 1 |
jnlpba | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-genia-v3.02",
"language:en",
"license:unknown"
] | The data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search
on MEDLINE using the MeSH terms human, blood cells and transcription factors. From this search 2,000 abstracts
were selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification.
Among the classes, 36 terminal classes were used to annotate the GENIA corpus. | 471 | 5 |
journalists_questions | false | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"license:unknown",
"question-identification"
] | The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic
tweets posted by journalists, manually labeled for question identification. | 269 | 0 |
kan_hope | false | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:kn",
"license:cc-by-4.0",
"hope-speech-detection",
"arxiv:2108.04616"
] | Numerous methods have been developed in recent years to monitor the spread of negativity on social media platforms by
eliminating vulgar, offensive, and fierce comments. However, relatively little work focuses on embracing positivity and
reinforcing supportive and reassuring content in online forums.
Consequently, we propose creating an English-Kannada hope speech dataset, KanHope, and run several experiments to benchmark the dataset.
The dataset consists of 6,176 user-generated comments in code-mixed Kannada scraped from YouTube and manually annotated as bearing hope
speech or not-hope speech.
This dataset was prepared for a hope-speech text classification benchmark on code-mixed Kannada, an under-resourced language. | 269 | 1 |
kannada_news | false | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:kn",
"license:cc-by-sa-4.0"
] | The Kannada news dataset contains only the headlines of news articles in three categories:
Entertainment, Tech, and Sports.
It contains around 6,300 news article headlines collected from Kannada news websites.
The dataset has been cleaned and split into train and test sets that can be used to benchmark
classification models in Kannada. | 268 | 0 |
kd_conv | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:apache-2.0"
] | KdConv is a Chinese multi-domain Knowledge-driven Conversation dataset, grounding the topics in multi-turn conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel), and 86K utterances with an average turn number of 19.0. These conversations contain in-depth discussions on related topics and natural transitions between multiple topics, while the corpus can also be used for exploration of transfer learning and domain adaptation. | 1,194 | 4 |
kde4 | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:af",
"language:ar",
"language:as",
"language:ast",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:crh",
"language:cs",
"language:csb",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hne",
"language:hr",
"language:hsb",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:lb",
"language:lt",
"language:lv",
"language:mai",
"language:mk",
"language:ml",
"language:mr",
"language:ms",
"language:mt",
"language:nb",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:nso",
"language:oc",
"language:or",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:rw",
"language:se",
"language:si",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:uk",
"language:uz",
"language:vi",
"language:wa",
"language:xh",
"language:zh",
"license:unknown"
] | A parallel corpus of KDE4 localization files (v.2).
92 languages, 4,099 bitexts
total number of files: 75,535
total number of tokens: 60.75M
total number of sentence fragments: 8.89M | 1,536 | 5 |
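A sketch of loading one bitext, assuming the loader selects the language pair via lang1/lang2 keyword arguments:
```python
from datasets import load_dataset

# Assumed lang1/lang2 keyword arguments select one of the 4,099 bitexts.
dataset = load_dataset("kde4", lang1="en", lang2="fr", split="train")
print(dataset[0]["translation"])  # e.g. {'en': '...', 'fr': '...'}
```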
kelm | false | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"data-to-text-generation",
"arxiv:2010.12688"
] | Data-To-Text Generation involves converting knowledge graph (KG) triples of the form (subject, relation, object) into
natural language sentences. This dataset consists of English KG data converted into paired natural language text.
The generated corpus consists of ∼18M sentences spanning ∼45M triples with ∼1500 distinct relations. | 869 | 5 |
kilt_tasks | false | [
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:dialogue-modeling",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:extractive-qa",
"task_ids:fact-checking",
"task_ids:fact-checking-retrieval",
"task_ids:open-domain-abstractive-qa",
"task_ids:open-domain-qa",
"task_ids:slot-filling",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:1M<n<10M",
"source_datasets:extended|natural_questions",
"source_datasets:extended|other-aidayago",
"source_datasets:extended|other-fever",
"source_datasets:extended|other-hotpotqa",
"source_datasets:extended|other-trex",
"source_datasets:extended|other-triviaqa",
"source_datasets:extended|other-wizardsofwikipedia",
"source_datasets:extended|other-wned-cweb",
"source_datasets:extended|other-wned-wiki",
"source_datasets:extended|other-zero-shot-re",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2009.02252"
] | KILT tasks training and evaluation data.
- [FEVER](https://fever.ai) | Fact Checking | fever
- [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) | Entity Linking | aidayago2
- [WNED-WIKI](https://github.com/U-Alberta/wned) | Entity Linking | wned
- [WNED-CWEB](https://github.com/U-Alberta/wned) | Entity Linking | cweb
- [T-REx](https://hadyelsahar.github.io/t-rex) | Slot Filling | trex
- [Zero-Shot RE](http://nlp.cs.washington.edu/zeroshot) | Slot Filling | structured_zeroshot
- [Natural Questions](https://ai.google.com/research/NaturalQuestions) | Open Domain QA | nq
- [HotpotQA](https://hotpotqa.github.io) | Open Domain QA | hotpotqa
- [TriviaQA](http://nlp.cs.washington.edu/triviaqa) | Open Domain QA | triviaqa
- [ELI5](https://facebookresearch.github.io/ELI5/explore.html) | Open Domain QA | eli5
- [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia) | Dialogue | wow
To finish linking TriviaQA questions to the IDs provided, follow the instructions [here](http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md). | 36,930 | 15 |
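Each task above maps to a config addressed by the short name in the right-hand column; a sketch (the 'input' field name follows the KILT format and is an assumption here):
```python
from datasets import load_dataset

# 'nq' is the config name for Natural Questions, per the table above.
kilt_nq = load_dataset("kilt_tasks", "nq")
print(kilt_nq["train"][0]["input"])  # assumed KILT-format field holding the question
```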
kilt_wikipedia | false | [] | KILT-Wikipedia: Wikipedia pre-processed for KILT. | 360 | 7 |
kinnews_kirnews | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:rn",
"language:rw",
"license:mit",
"arxiv:2010.12174"
] | Kinyarwanda and Kirundi news classification datasets | 664 | 0 |
klue | false | [
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:token-classification",
"task_ids:extractive-qa",
"task_ids:named-entity-recognition",
"task_ids:natural-language-inference",
"task_ids:parsing",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"relation-extraction",
"arxiv:2105.09680"
] | KLUE (Korean Language Understanding Evaluation)
The Korean Language Understanding Evaluation (KLUE) benchmark is a series of datasets to evaluate the natural language
understanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible
to anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain
unambiguous annotations for all datasets. Furthermore, we build an evaluation system and carefully choose evaluation metrics
for every task, thus establishing fair comparison across Korean language models. | 13,782 | 13 |
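A sketch of loading a single KLUE task, assuming each of the 8 tasks is a separate config and using 'ynat' (topic classification) as an assumed config name:
```python
from datasets import load_dataset

# 'ynat' is an assumed config name for the topic classification task.
ynat = load_dataset("klue", "ynat")
print(ynat["train"][0])
```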
kor_3i4k | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-4.0",
"arxiv:1811.04231"
] | This dataset is designed to identify speaker intention from real-life spoken utterances in Korean, classifying each into one of
7 categories: fragment, description, question, command, rhetorical question, rhetorical command, and intonation-dependent utterance. | 269 | 1 |
kor_hate | false | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2005.12503"
] | Human-annotated Korean corpus collected from a popular domestic entertainment news aggregation platform
for toxic speech detection. Comments are annotated for gender bias, social bias and hate speech. | 294 | 3 |
kor_ner | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:mit"
] | Korean named entity recognition dataset | 313 | 0 |
kor_nli | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|multi_nli",
"source_datasets:extended|snli",
"source_datasets:extended|xnli",
"language:ko",
"license:cc-by-sa-4.0"
] | Korean Natural Language Inference datasets | 575 | 1 |
kor_nlu | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|snli",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2004.03289"
] | The dataset contains data for benchmarking Korean models on NLI and STS. | 465 | 1 |
kor_qpair | false | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:mit"
] | This is a Korean paired question dataset containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task. | 302 | 2 |
kor_sae | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:1912.00342",
"arxiv:1811.04231"
] | This new dataset is designed to extract intent from non-canonical directives which will help dialog managers
extract intent from user dialog that may have no clear objective or are paraphrased forms of utterances. | 273 | 2 |
kor_sarcasm | false | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:mit",
"sarcasm-detection"
] | This dataset is designed to detect sarcasm in Korean, which distorts the literal meaning of a sentence
and is highly relevant to sentiment classification. | 268 | 1 |
labr | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"license:unknown"
] | This dataset contains over 63,000 book reviews in Arabic. It is the largest sentiment analysis dataset for Arabic to date. The book reviews were harvested from the website Goodreads during the month of March 2013. Each book review comes with the Goodreads review id, the user id, the book id, the rating (1 to 5) and the text of the review. | 302 | 0 |
lama | false | [
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:fact-checking-retrieval",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:extended|conceptnet5",
"source_datasets:extended|squad",
"language:en",
"license:cc-by-4.0",
"probing"
] | LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA. | 2,551 | 4 |
lambada | false | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|bookcorpus",
"language:en",
"license:cc-by-4.0",
"long-range-dependency"
] | The LAMBADA dataset evaluates the capabilities of computational models
for text understanding by means of a word prediction task.
LAMBADA is a collection of narrative passages sharing the characteristic
that human subjects are able to guess their last word if
they are exposed to the whole passage, but not if they
only see the last sentence preceding the target word.
To succeed on LAMBADA, computational models cannot
simply rely on local context, but must be able to
keep track of information in the broader discourse.
The LAMBADA dataset is extracted from BookCorpus and
consists of 10'022 passages, divided into 4'869 development
and 5'153 test passages. The training data for language
models to be tested on LAMBADA include the full text
of 2'662 novels (disjoint from those in dev+test),
comprising 203 million words. | 16,151 | 17 |
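A sketch of the task framing described above, splitting each passage into the visible context and the target last word (the 'text' column name is an assumption):
```python
from datasets import load_dataset

# Assumed 'text' column holding one passage per example.
lambada = load_dataset("lambada", split="test")
context, target = lambada[0]["text"].rsplit(" ", 1)
print("context ends with:", context[-60:])
print("target word:", target)
```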
large_spanish_corpus | false | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:100M<n<1B",
"size_categories:10K<n<100K",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:es",
"license:mit"
] | The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument. | 2,557 | 10 |
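A sketch of selecting a single corpus by config name, following the description above:
```python
from datasets import load_dataset

# "all_wiki" restricts the data to Spanish Wikipedia, per the description;
# omitting the name falls back to the default "combined" config.
wiki_es = load_dataset("large_spanish_corpus", name="all_wiki", split="train")
print(wiki_es[0])
```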
laroseda | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ro",
"license:cc-by-4.0",
"arxiv:2101.04197",
"arxiv:1901.06543"
] | LaRoSeDa (A Large Romanian Sentiment Data Set) contains 15,000 reviews written in Romanian, of which 7,500 are positive and 7,500 negative.
Star ratings of 1 and 2 and of 4 and 5 are provided for negative and positive reviews respectively.
The current dataset uses star rating as the label for multi-class classification. | 828 | 0 |
lc_quad | false | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"knowledge-base-qa"
] | LC-QuAD 2.0 is a Large Question Answering dataset with 30,000 pairs of questions and their corresponding SPARQL queries. The target knowledge bases are Wikidata and DBpedia, specifically the 2018 version. Please see our paper for details about the dataset creation process and framework. | 332 | 0 |
lener_br | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:unknown"
] | LeNER-Br is a Portuguese language dataset for named entity recognition
applied to legal documents. LeNER-Br consists entirely of manually annotated
legislation and legal cases texts and contains tags for persons, locations,
time entities, organizations, legislation and legal cases.
To compose the dataset, 66 legal documents from several Brazilian Courts were
collected. Courts of superior and state levels were considered, such as Supremo
Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de Minas
Gerais and Tribunal de Contas da União. In addition, four legislation documents
were collected, such as "Lei Maria da Penha", giving a total of 70 documents. | 326 | 14 |
lex_glue | false | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:multiple-choice-qa",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2110.00976",
"arxiv:2109.00904",
"arxiv:1805.01217",
"arxiv:2104.08671"
] | The Legal General Language Understanding Evaluation (LexGLUE) benchmark is
a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks. | 6,171 | 19 |
liar | false | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"fake-news-detection",
"arxiv:1705.00648"
] | LIAR is a dataset for fake news detection with 12.8K human labeled short statements from politifact.com's API, and each statement is evaluated by a politifact.com editor for its truthfulness. The distribution of labels in the LIAR dataset is relatively well-balanced: except for 1,050 pants-fire cases, the instances for all other labels range from 2,063 to 2,638. In each case, the labeler provides a lengthy analysis report to ground each judgment. | 1,966 | 3 |
librispeech_asr | false | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0"
] | LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | 15,039 | 35 |
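A sketch of loading a split and reading the decoded audio, assuming a 'clean' config and an 'audio' column carrying the array and sampling rate:
```python
from datasets import load_dataset

# Assumed config name 'clean' and an 'audio' column with decoded samples.
ds = load_dataset("librispeech_asr", "clean", split="validation")
sample = ds[0]
print(sample["text"])
print(sample["audio"]["sampling_rate"])  # 16 kHz per the description above
```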
librispeech_lm | false | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc0-1.0"
] | Language modeling resources to be used in conjunction with the LibriSpeech ASR corpus. | 269 | 0 |
limit | false | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|net-activities-captions",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0"
] | Motion recognition is one of the basic cognitive capabilities of many life forms, yet identifying the motion of physical entities in natural language has not been explored extensively and empirically. The Literal-Motion-in-Text (LiMiT) dataset is a large human-annotated collection of English text sentences describing the physical occurrence of motion, with annotated physical entities in motion. | 867 | 3 |
lince | false | [] | LinCE is a centralized Linguistic Code-switching Evaluation benchmark
(https://ritual.uh.edu/lince/) that contains data for training and evaluating
NLP systems on code-switching tasks. | 1,591 | 4 |
linnaeus | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown"
] | A novel corpus of full-text documents manually annotated for species mentions. | 271 | 1 |
liveqa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:unknown"
] | This is LiveQA, a Chinese dataset constructed from play-by-play live broadcasts.
It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games,
collected from the Chinese Hupu website. | 271 | 0 |
lj_speech | false | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unlicense"
] | This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading
passages from 7 non-fiction books in English. A transcription is provided for each clip. Clips vary in length
from 1 to 10 seconds and have a total length of approximately 24 hours.
Note that in order to limit the required storage for preparing this dataset, the audio
is stored in the .wav format and is not converted to a float32 array. To convert the audio
file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf
def map_to_array(batch):
    # Read the .wav file into a float32 numpy array
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | 392 | 5 |
lm1b | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling"
] | A benchmark corpus to be used for measuring progress in statistical language modeling. This has almost one billion words in the training data. | 534 | 8 |
lst20 | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:th",
"license:other",
"word-segmentation",
"clause-segmentation",
"sentence-segmentation"
] | LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
In total, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, annotated with
16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Given its size, this dataset is
considered large enough for developing joint neural models for NLP.
The corpus must be downloaded manually from https://aiforthai.in.th/corpus.php. | 853 | 3 |
m_lama | false | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:open-domain-qa",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:extended|lama",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:ceb",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:ga",
"language:gl",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:it",
"language:ja",
"language:ka",
"language:ko",
"language:la",
"language:lt",
"language:lv",
"language:ms",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:ta",
"language:th",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:zh",
"license:cc-by-nc-sa-4.0",
"probing",
"arxiv:2102.00894"
] | mLAMA: a multilingual version of the LAMA benchmark (T-REx and GoogleRE) covering 53 languages. | 269 | 1 |
mac_morpho | false | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:cc-by-4.0"
] | Mac-Morpho is a corpus of Brazilian Portuguese texts annotated with part-of-speech tags.
Its first version was released in 2003 [1], and since then, two revisions have been made in order
to improve the quality of the resource [2, 3].
The corpus is available for download split into train, development and test sections.
These are 76%, 4% and 20% of the corpus total, respectively (the reason for the unusual numbers
is that the corpus was first split into 80%/20% train/test, and then 5% of the train section was
set aside for development). This split was used in [3], and new POS tagging research with Mac-Morpho
is encouraged to follow it in order to make consistent comparisons possible.
[1] Aluísio, S., Pelizzoni, J., Marchi, A.R., de Oliveira, L., Manenti, R., Marquiafável, V. 2003.
An account of the challenge of tagging a reference corpus for brazilian portuguese.
In: Proceedings of the 6th International Conference on Computational Processing of the Portuguese Language. PROPOR 2003
[2] Fonseca, E.R., Rosa, J.L.G. 2013. Mac-morpho revisited: Towards robust part-of-speech.
In: Proceedings of the 9th Brazilian Symposium in Information and Human Language Technology – STIL
[3] Fonseca, E.R., Aluísio, Sandra Maria, Rosa, J.L.G. 2015.
Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese.
Journal of the Brazilian Computer Society. | 283 | 1 |
makhzan | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ur",
"license:other"
] | An Urdu text corpus for machine learning, natural language processing and linguistic analysis. | 269 | 0 |
masakhaner | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:am",
"language:ha",
"language:ig",
"language:lg",
"language:luo",
"language:pcm",
"language:rw",
"language:sw",
"language:wo",
"language:yo",
"license:unknown",
"arxiv:2103.11811"
] | MasakhaNER is the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages.
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
MasakhaNER is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for ten African languages:
- Amharic
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Nigerian-Pidgin
- Swahili
- Wolof
- Yoruba
The train/validation/test sets are available for all the ten languages.
For more details see https://arxiv.org/abs/2103.11811 | 16,031 | 3 |
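A sketch of loading one language and pairing tokens with their NER tags; the config code 'yor' (Yoruba) and the column names are assumptions:
```python
from datasets import load_dataset

# Assumed per-language config 'yor' and columns 'tokens'/'ner_tags'.
yor = load_dataset("masakhaner", "yor", split="train")
example = yor[0]
print(list(zip(example["tokens"], example["ner_tags"])))
```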
math_dataset | false | [
"language:en"
] | Mathematics database.
This dataset code generates mathematical question and answer pairs,
from a range of question types at roughly school-level difficulty.
This is designed to test the mathematical learning and algebraic
reasoning skills of learning models.
Original paper: Analysing Mathematical Reasoning Abilities of Neural Models
(Saxton, Grefenstette, Hill, Kohli).
Example usage:
train_examples, val_examples = datasets.load_dataset(
    'math_dataset/arithmetic__mul',
    split=['train', 'test'],
    as_supervised=True) | 19,467 | 9 |
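The call above follows the original TensorFlow Datasets style; with the Hugging Face datasets library, the module and question type are passed separately (a sketch, assuming the question type is a config name):
```python
from datasets import load_dataset

# Assumed config name matching the question type above.
math_mul = load_dataset("math_dataset", "arithmetic__mul")
print(math_mul["train"][0])  # e.g. {'question': ..., 'answer': ...}
```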
math_qa | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|aqua_rat",
"language:en",
"license:apache-2.0"
] | Our dataset is gathered by using a new representation language to annotate over the AQuA-RAT dataset. AQuA-RAT has provided the questions, options, rationale, and the correct options. | 9,851 | 10 |
matinf | false | [] | MATINF is the first jointly labeled large-scale dataset for classification, question answering and summarization.
MATINF contains 1.07 million question-answer pairs with human-labeled categories and user-generated question
descriptions. Based on such rich information, MATINF is applicable for three major NLP tasks, including classification,
question answering, and summarization. We benchmark existing methods and a novel multi-task baseline over MATINF to
inspire further research. Our comprehensive comparison and experiments over MATINF and other datasets demonstrate the
merits held by MATINF. | 672 | 1 |
mbpp | false | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"code-generation",
"arxiv:2108.07732"
] | The MBPP (Mostly Basic Python Problems) dataset consists of around 1,000 crowd-sourced Python
programming problems, designed to be solvable by entry level programmers, covering programming
fundamentals, standard library functionality, and so on. Each problem consists of a task
description, code solution and 3 automated test cases. The sanitized subset of the data has been
hand-verified by the authors. | 30,716 | 13 |
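A sketch of running a problem's bundled test cases against its reference solution; the 'sanitized' config and the 'code'/'test_list' column names are assumptions:
```python
from datasets import load_dataset

# Assumed config 'sanitized' and columns 'code' (solution) and
# 'test_list' (assert statements).
mbpp = load_dataset("mbpp", "sanitized", split="test")
problem = mbpp[0]
namespace = {}
exec(problem["code"], namespace)        # define the reference solution
for assertion in problem["test_list"]:  # each entry is an assert statement
    exec(assertion, namespace)
print("all bundled test cases passed")
```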
mc4 | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"size_categories:10M<n<100M",
"size_categories:100M<n<1B",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:ceb",
"language:co",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fil",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:ha",
"language:haw",
"language:he",
"language:hi",
"language:hmn",
"language:ht",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:iw",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:ne",
"language:nl",
"language:no",
"language:ny",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:st",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tr",
"language:uk",
"language:und",
"language:ur",
"language:uz",
"language:vi",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"license:odc-by",
"arxiv:1910.10683"
] | A colossal, cleaned version of Common Crawl's web crawl corpus.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI. | 44,342 | 51 |
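A sketch of streaming one language config so the full corpus is not downloaded:
```python
from datasets import load_dataset

# One config per language code from the list above; streaming iterates
# over the corpus without materializing it on disk.
mc4_es = load_dataset("mc4", "es", split="train", streaming=True)
print(next(iter(mc4_es))["text"])
```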
mc_taco | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1909.03065"
] | MC-TACO (Multiple Choice TemporAl COmmonsense) is a dataset of 13k question-answer
pairs that require temporal commonsense comprehension. A system receives a sentence
providing context information, a question designed to require temporal commonsense
knowledge, and multiple candidate answers. More than one candidate answer can be plausible.
The task is framed as binary classification: given the context, the question,
and the candidate answer, the task is to determine whether the candidate
answer is plausible ("yes") or not ("no"). | 1,471 | 0 |
md_gender_bias | false | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:extended|other-convai2",
"source_datasets:extended|other-light",
"source_datasets:extended|other-opensubtitles",
"source_datasets:extended|other-yelp",
"source_datasets:original",
"language:en",
"license:mit",
"gender-bias",
"arxiv:1811.00552"
] | Machine learning models are trained to find patterns in data.
NLP models can inadvertently learn socially undesirable patterns when training on gender biased text.
In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:
bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
In addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.
Distinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.
We show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,
detecting gender bias in arbitrary text, and shedding light on offensive language in terms of genderedness. | 4,472 | 7 |
mdd | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"arxiv:1511.06931"
] | The Movie Dialog dataset (MDD) is designed to measure how well
models can perform at goal and non-goal orientated dialog
centered around the topic of movies (question answering,
recommendation and discussion). | 2,187 | 2 |
med_hop | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"multi-hop",
"arxiv:1710.06481"
] | MedHop is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins. | 554 | 2 |
medal | false | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:unknown",
"disambiguation"
] | A large medical text dataset (14Go) curated to 4Go for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. For example, DHF can be disambiguated to dihydrofolate, diastolic heart failure, dengue hemorragic fever or dihydroxyfumarate | 860 | 5 |
medical_dialog | false | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"language:zh",
"license:unknown",
"arxiv:2004.03329"
] | The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com.
All copyrights of the data belong to healthcaremagic.com and icliniq.com. | 637 | 11 |
medical_questions_pairs | false | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2008.13546"
] | This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. | 1,512 | 11 |
menyo20k_mt | false | [
"task_categories:translation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:yo",
"license:cc-by-nc-4.0",
"arxiv:2103.08647"
] | MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain). The development and test sets are available upon request. | 268 | 1 |
meta_woz | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:2003.01680"
] | MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models. We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to quickly simulate user responses with a small amount of data. Such fast-adaptation models fall into the research areas of transfer learning and meta learning. The dataset consists of 37,884 crowdsourced dialogues recorded between two human users in a Wizard of Oz setup, in which one was instructed to behave like a bot, and the other a true human user. The users are assigned a task belonging to a particular domain, for example booking a reservation at a particular restaurant, and work together to complete the task. Our dataset spans 47 domains having 227 tasks total. Dialogues are a minimum of 10 turns long. | 1,091 | 3 |
metooma | false | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc0-1.0"
] | The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.
Due to Twitter's development policies, we only provide the tweet IDs and corresponding labels;
other data can be fetched via the Twitter API.
The data has been labelled by experts, with the majority vote taken into account when deciding the final label.
We provide these labels for each of the tweets. The labels provided for each data point
includes -- Relevance, Directed Hate, Generalized Hate,
Sarcasm, Allegation, Justification, Refutation, Support, Oppose | 271 | 0 |
metrec | false | [
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"license:unknown",
"poetry-classification"
] | Arabic Poetry Metric Classification.
The dataset contains the verses and their corresponding meter classes. Meter classes are represented as numbers from 0 to 13. The dataset can be highly useful for further research in order to improve the field of Arabic poems' meter classification. The train dataset contains 47,124 records and the test dataset contains 8,316 records. | 278 | 2 |
miam | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"dialogue-act-classification"
] | Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and
analyzing natural language understanding systems specifically designed for spoken language. Datasets
are in English, French, German, Italian and Spanish. They cover a variety of domains including
spontaneous speech, scripted scenarios, and joint task completion. Some datasets additionally include
emotion and/or sentiment labels. | 950 | 1 |
mkb | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:translation",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:bn",
"language:en",
"language:gu",
"language:hi",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-4.0",
"arxiv:2007.07691"
] | The Prime Minister's speeches - Mann Ki Baat, on All India Radio, translated into many languages. | 6,052 | 1 |
mkqa | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:extended|natural_questions",
"source_datasets:original",
"language:ar",
"language:da",
"language:de",
"language:en",
"language:es",
"language:fi",
"language:fr",
"language:he",
"language:hu",
"language:it",
"language:ja",
"language:km",
"language:ko",
"language:ms",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ru",
"language:sv",
"language:th",
"language:tr",
"language:vi",
"language:zh",
"license:cc-by-3.0",
"arxiv:2007.15207"
] | We introduce MKQA, an open-domain question answering evaluation set comprising 10k question-answer pairs sampled from the Google Natural Questions dataset, aligned across 26 typologically diverse languages (260k question-answer pairs in total). For each query we collected new passage-independent answers. These queries and answers were then human translated into 25 Non-English languages. | 977 | 6 |
mlqa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:de",
"language:es",
"language:ar",
"language:zh",
"language:vi",
"language:hi",
"license:cc-by-sa-3.0"
] | MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance.
MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages - English, Arabic,
German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between
4 different languages on average. | 10,987 | 8 |
mlsum | false | [
"task_categories:summarization",
"task_categories:translation",
"task_categories:text-classification",
"task_ids:news-articles-summarization",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:extended|cnn_dailymail",
"source_datasets:original",
"language:de",
"language:es",
"language:fr",
"language:ru",
"language:tr",
"license:other"
] | We present MLSUM, the first large-scale MultiLingual SUMmarization dataset.
Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish.
Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community.
We report cross-lingual comparative analyses based on state-of-the-art systems.
These highlight existing biases which motivate the use of a multi-lingual dataset. | 5,283 | 13 |
mnist | false | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-nist",
"language:en",
"license:mit"
] | The MNIST dataset consists of 70,000 28x28 black-and-white images in 10 classes (one for each digit), with 7,000
images per class. There are 60,000 training images and 10,000 test images. | 9,528 | 23 |
mocha | false | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"generative-reading-comprehension-metric"
] | Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap and are agnostic to the nuances of reading comprehension. To address this, we introduce a benchmark for training and evaluating generative reading comprehension metrics: MOdeling Correctness with Human Annotations. MOCHA contains 40K human judgement scores on model outputs from 6 diverse question answering datasets and an additional set of minimal pairs for evaluation. Using MOCHA, we train an evaluation metric: LERC, a Learned Evaluation metric for Reading Comprehension, to mimic human judgement scores. | 853 | 0 |
moroco | false | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ro",
"license:cc-by-4.0",
"arxiv:1901.06543"
] | The MOROCO (Moldavian and Romanian Dialectal Corpus) dataset contains 33564 samples of text collected from the news domain.
The samples belong to one of the following six topics:
- culture
- finance
- politics
- science
- sports
- tech | 294 | 0 |
movie_rationales | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown"
] | The movie rationale dataset contains human annotated rationales for movie
reviews. | 1,186 | 2 |
mrqa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|drop",
"source_datasets:extended|hotpot_qa",
"source_datasets:extended|natural_questions",
"source_datasets:extended|race",
"source_datasets:extended|search_qa",
"source_datasets:extended|squad",
"source_datasets:extended|trivia_qa",
"language:en",
"license:unknown",
"arxiv:1910.09753",
"arxiv:1606.05250",
"arxiv:1611.09830",
"arxiv:1705.03551",
"arxiv:1704.05179",
"arxiv:1809.09600",
"arxiv:1903.00161",
"arxiv:1804.07927",
"arxiv:1704.04683",
"arxiv:1706.04115"
] | The MRQA 2019 Shared Task focuses on generalization in question answering.
An effective question answering system should do more than merely
interpolate from the training set to answer test examples drawn
from the same distribution: it should also be able to extrapolate
to out-of-distribution examples — a significantly harder challenge.
The dataset is a collection of 18 existing QA datasets (carefully selected
subsets of them) converted to the same format (the SQuAD format). Among
these 18 datasets, six datasets were made available for training,
six datasets were made available for development, and the final six
for testing. The dataset is released as part of the MRQA 2019 Shared Task. | 728 | 6 |
ms_marco | false | [
"language:en",
"arxiv:1611.09268"
] | Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.
The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer.
Since then we have released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset,
a keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.
There have been 277 submissions: 20 KeyPhrase Extraction submissions, 87 passage ranking submissions, 0 document ranking
submissions, 73 QnA V2 submissions, 82 NLGEN submissions, and 15 QnA V1 submissions.
This data comes in three tasks/forms: the original QnA dataset (v1.1), Question Answering (v2.1), and Natural Language Generation (v2.1).
The original question answering dataset featured 100,000 examples and was released in 2016. The leaderboard is now closed, but the data is available below.
The current competitive tasks are Question Answering and Natural Language Generation. Question Answering features over 1,000,000 queries and
is much like the original QnA dataset but bigger and with higher quality. The Natural Language Generation dataset features 180,000 examples and
builds upon the QnA dataset to deliver answers that could be spoken by a smart speaker. | 6,792 | 13 |
ms_terms | false | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:bs",
"language:ca",
"language:chr",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fil",
"language:fr",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:guc",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:knn",
"language:ko",
"language:ku",
"language:ky",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:nb",
"language:ne",
"language:nl",
"language:nn",
"language:ory",
"language:pa",
"language:pl",
"language:prs",
"language:pst",
"language:pt",
"language:qu",
"language:quc",
"language:ro",
"language:ru",
"language:rw",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:st",
"language:sv",
"language:swh",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tn",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:ms-pl"
] | The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products.
It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary
for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange. | 268 | 0 |
msr_genomics_kbcomp | false | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"genomics-knowledge-base-bompletion"
] | The database is derived from the NCI PID Pathway Interaction Database, and the textual mentions are extracted from cooccurring pairs of genes in PubMed abstracts, processed and annotated by Literome (Poon et al. 2014). This dataset was used in the paper “Compositional Learning of Embeddings for Relation Paths in Knowledge Bases and Text” (Toutanova, Lin, Yih, Poon, and Quirk, 2016). | 269 | 0 |
msr_sqa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:ms-pl"
] | Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We created SQA by asking crowdsourced workers to decompose 2,022 questions from WikiTableQuestions (WTQ), which contains highly-compositional questions about tables from Wikipedia. We had three workers decompose each WTQ question, resulting in a dataset of 6,066 sequences that contain 17,553 questions in total. Each question is also associated with answers in the form of cell locations in the tables. | 368 | 0 |
msr_text_compression | false | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-Open-American-National-Corpus-(OANC1)",
"language:en",
"license:other"
] | This dataset contains sentences and short paragraphs with corresponding shorter (compressed) versions. There are up to five compressions for each input text, together with quality judgements of their meaning preservation and grammaticality. The dataset is derived using source texts from the Open American National Corpus (www.anc.org) and crowd-sourcing. | 282 | 2 |
msr_zhen_translation_parity | false | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:extended|other-newstest2017",
"language:en",
"license:ms-pl"
] | Translator Human Parity Data
Human evaluation results and translation output for the Translator Human Parity Data release,
as described in https://blogs.microsoft.com/ai/machine-translation-news-test-set-human-parity/.
The Translator Human Parity Data release contains all human evaluation results and translations
related to our paper "Achieving Human Parity on Automatic Chinese to English News Translation",
published on March 14, 2018. | 279 | 0 |
msra_ner | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:unknown"
] | The Third International Chinese Language Processing Bakeoff was held in Spring 2006
to assess the state of the art in two important tasks: word segmentation and
named entity recognition. Twenty-nine groups submitted result sets in the two
tasks across two tracks and a total of five corpora. We found strong results
in both tasks as well as continuing challenges.
MSRA NER is one of the provided datasets.
There are three types of NE: PER (person), ORG (organization) and LOC (location).
The dataset is in the BIO scheme.
For more details see https://faculty.washington.edu/levow/papers/sighan06.pdf | 751 | 11 |
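Since the card states the corpus uses the BIO scheme over PER/ORG/LOC, here is a minimal, self-contained sketch of how BIO tags decode into entity spans; the tokens and tags below are illustrative examples, not drawn from the corpus itself.

```python
# A minimal sketch of decoding the BIO scheme described above.
def bio_to_spans(tags):
    """Collect (entity_type, start, end) spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):                      # a new entity begins
            if start is not None:
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue                                  # entity continues
        else:                                         # "O" or an inconsistent tag
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:
        spans.append((etype, start, len(tags)))
    return spans

tokens = ["我", "在", "北", "京", "工", "作"]   # illustrative: "I work in Beijing"
tags   = ["O", "O", "B-LOC", "I-LOC", "O", "O"]
for etype, s, e in bio_to_spans(tags):
    print(etype, "".join(tokens[s:e]))            # LOC 北京
```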
mt_eng_vietnamese | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:vi",
"license:unknown"
] | Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task. | 575 | 7 |
muchocine | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:unknown"
] | The Muchocine reviews dataset contains 3,872 longform movie reviews in the Spanish language,
each with a shorter summary review, and a rating on a 1-5 scale. | 270 | 4 |
multi_booked | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ca",
"language:eu",
"license:cc-by-3.0",
"arxiv:1803.08614"
] | MultiBooked is a corpus of Basque and Catalan Hotel Reviews Annotated for Aspect-level Sentiment Classification.
The corpora are compiled from hotel reviews taken mainly from booking.com. The corpora are in Kaf/Naf format, which is
an xml-style stand-off format that allows for multiple layers of annotation. Each review was sentence- and
word-tokenized and lemmatized using Freeling for Catalan and ixa-pipes for Basque. Finally, for each language two
annotators annotated opinion holders, opinion targets, and opinion expressions for each review, following the
guidelines set out in the OpeNER project. | 399 | 0 |
multi_eurlex | false | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-sa-4.0",
"arxiv:2109.00904"
] | MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource).
Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU.
As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels);
this is a multi-label classification task (given the text, predict multiple labels). | 3,776 | 12 |
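Given that the task is multi-label (several EUROVOC concepts per law), a minimal loading sketch follows, assuming the `datasets` library; the "en" configuration name and the "text"/"labels" feature names are assumptions, since the card does not spell them out.

```python
from datasets import load_dataset

# A minimal sketch; the "en" configuration and the "text"/"labels"
# feature names are assumptions (one configuration per language is
# how the card describes the corpus).
eurlex_en = load_dataset("multi_eurlex", "en", split="train")

example = eurlex_en[0]
print(example["labels"])   # multiple EUROVOC label ids per document
print(example["text"][:200])
```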
multi_news | false | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:1906.01749"
] | Multi-News consists of news articles and human-written summaries
of these articles from the site newser.com.
Each summary is professionally written by editors and
includes links to the original articles cited.
There are two features:
- document: text of the news articles, separated by the special token "|||||".
- summary: news summary. | 10,765 | 14 |
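Because the source articles are packed into a single `document` string, recovering them is just a split on the separator token; a minimal sketch, assuming the `datasets` library and the `document`/`summary` feature names listed above.

```python
from datasets import load_dataset

# A minimal sketch, assuming the `datasets` library is installed; the
# "document"/"summary" feature names follow the card above.
multi_news = load_dataset("multi_news", split="train")

example = multi_news[0]
# Source articles are separated by the special token "|||||".
articles = [a.strip() for a in example["document"].split("|||||") if a.strip()]
print(len(articles), "source articles")
print(example["summary"][:200])
```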
multi_nli | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"license:cc-by-sa-3.0",
"license:mit",
"license:other"
] | The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
crowd-sourced collection of 433k sentence pairs annotated with textual
entailment information. The corpus is modeled on the SNLI corpus, but differs in
that it covers a range of genres of spoken and written text, and supports a
distinctive cross-genre generalization evaluation. The corpus served as the
basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen. | 5,186 | 23 |
multi_nli_mismatch | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"license:cc-by-sa-3.0",
"license:mit",
"license:other"
] | The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
crowd-sourced collection of 433k sentence pairs annotated with textual
entailment information. The corpus is modeled on the SNLI corpus, but differs in
that it covers a range of genres of spoken and written text, and supports a
distinctive cross-genre generalization evaluation. The corpus served as the
basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen. | 270 | 1 |
multi_para_crawl | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:bg",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:es",
"language:et",
"language:eu",
"language:fi",
"language:fr",
"language:ga",
"language:gl",
"language:ha",
"language:hr",
"language:hu",
"language:ig",
"language:is",
"language:it",
"language:km",
"language:lt",
"language:lv",
"language:mt",
"language:my",
"language:nb",
"language:ne",
"language:nl",
"language:nn",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sv",
"language:sw",
"language:tl",
"license:cc0-1.0"
] | Parallel corpora from web crawls collected in the ParaCrawl project, further processed into a multi-parallel corpus by pivoting via English. Here we only provide the additional language pairs that came out of pivoting. The bitexts for English are available from the ParaCrawl release.
40 languages, 669 bitexts
total number of files: 40
total number of tokens: 10.14G
total number of sentence fragments: 505.48M
Please acknowledge the ParaCrawl project at http://paracrawl.eu. This version is derived from the original release at their website, adjusted for redistribution via the OPUS corpus collection. Please acknowledge OPUS as well for this service. | 797 | 0 |
multi_re_qa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:1M<n<10M",
"source_datasets:extended|other-BioASQ",
"source_datasets:extended|other-DuoRC",
"source_datasets:extended|other-HotpotQA",
"source_datasets:extended|other-Natural-Questions",
"source_datasets:extended|other-Relation-Extraction",
"source_datasets:extended|other-SQuAD",
"source_datasets:extended|other-SearchQA",
"source_datasets:extended|other-TextbookQA",
"source_datasets:extended|other-TriviaQA",
"language:en",
"license:unknown",
"arxiv:2005.02507"
] | MultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets (SearchQA, TriviaQA, HotpotQA, NaturalQuestions, and SQuAD) contain both training and test data, and three (BioASQ, RelationExtraction, and TextbookQA) contain only test data. | 1,336 | 0 |
multi_woz_v22 | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:dialogue-modeling",
"task_ids:multi-class-classification",
"task_ids:parsing",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1810.00278"
] | Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning multiple domains and topics.
MultiWOZ 2.1 (Eric et al., 2019) identified and fixed many erroneous annotations and user utterances in the original version, resulting in an
improved version of the dataset. MultiWOZ 2.2 is yet another improved version of this dataset, which identifies and fixes dialogue state annotation errors
across 17.3% of the utterances on top of MultiWOZ 2.1 and redefines the ontology by disallowing vocabularies of slots with a large number of possible values
(e.g., restaurant name, time of booking) and introducing standardized slot span annotations for these slots. | 1,900 | 9 |
multi_x_science_sum | false | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"paper-abstract-generation",
"arxiv:2010.14235"
] | Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. | 2,224 | 7 |
multidoc2dial | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:extended|doc2dial",
"language:en",
"license:apache-2.0",
"arxiv:2109.12595"
] | MultiDoc2Dial is a new task and dataset on modeling goal-oriented dialogues grounded in multiple documents. Most previous works treat document-grounded dialogue modeling as a machine reading comprehension task based on a single given document or passage. We aim to address more realistic scenarios where a goal-oriented information-seeking conversation involves multiple topics, and hence is grounded on different documents. | 602 | 2 |
multilingual_librispeech | false | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"license:cc-by-4.0",
"arxiv:2012.03411"
] | The Multilingual LibriSpeech (MLS) dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, and Polish. | 1,073 | 5 |
mutual_friends | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:1704.07130"
] | Our goal is to build systems that collaborate with people by exchanging
information through natural language and reasoning over structured knowledge
bases. In the MutualFriend task, two agents, A and B, each have a private
knowledge base, which contains a list of friends with multiple attributes
(e.g., name, school, major, etc.). The agents must chat with each other
to find their unique mutual friend. | 269 | 2 |
mwsc | false | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-coreference-resolution",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|winograd_wsc",
"language:en",
"license:cc-by-4.0",
"arxiv:1806.08730"
] | Examples taken from the Winograd Schema Challenge modified to ensure that answers are a single word from the context.
This modified Winograd Schema Challenge (MWSC) ensures that scores are neither inflated nor deflated by oddities in phrasing. | 1,127 | 0 |
myanmar_news | false | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:my",
"license:gpl-3.0"
] | The Myanmar news dataset contains article snippets in four categories:
Business, Entertainment, Politics, and Sport.
These were collected in October 2017 by Aye Hninn Khine. | 269 | 0 |
narrativeqa | false | [
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1712.07040"
] | The NarrativeQA dataset for question answering on long documents (movie scripts, books). It includes the list of documents with Wikipedia summaries, links to full stories, and questions and answers. | 1,009 | 0 |
narrativeqa_manual | false | [
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1712.07040"
] | The Narrative QA Manual dataset is a reading comprehension dataset, in which the reader must answer questions about stories by reading entire books or movie scripts. The QA tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because a script in the original repository downloads the stories from the original URLs every time, the links are sometimes broken or invalid. Therefore, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" in the root directory and downloads the stories there. This folder containing the stories can be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`. | 294 | 0 |
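The loading path above is spelled out by the card itself; putting it together, with the `tmp` path following the authors' download script (that path is the only assumption here):

```python
from datasets import load_dataset

# First fetch the stories with the authors' script, e.g.
#   bash download_stories.sh   # creates a "tmp" folder with the stories
# then point data_dir at that folder, as the card instructs.
narrativeqa = load_dataset("narrativeqa_manual", data_dir="tmp")
print(narrativeqa)
```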
natural_questions | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0"
] | The NQ corpus contains questions from real users, and it requires QA systems to
read and comprehend an entire Wikipedia article that may or may not contain the
answer to the question. The inclusion of real user questions, and the
requirement that solutions should read an entire page to find the answer, cause
NQ to be a more realistic and challenging task than prior QA datasets. | 1,304 | 12 |
ncbi_disease | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown"
] | This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed
abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural
language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions
and their corresponding concepts in Medical Subject Headings (MeSH®) or Online Mendelian Inheritance in Man (OMIM®).
Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations.
Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two
annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked
against annotations of the rest of the corpus to assure corpus-wide consistency.
For more details, see: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/
The original dataset can be downloaded from: https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/NCBI_corpus.zip
This dataset has been converted to CoNLL format for NER using the following tool: https://github.com/spyysalo/standoff2conll
Note: there is a duplicate document (PMID 8528200) in the original data, and the duplicate is recreated in the converted data. | 2,686 | 15 |
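Since the corpus has been converted to CoNLL format for NER, each example is a token sequence with aligned tags; a minimal sketch, assuming the `datasets` library. The `tokens`/`ner_tags` feature names are the usual convention for CoNLL-style Hub datasets and are an assumption here, not quoted from the card.

```python
from datasets import load_dataset

# A minimal sketch, assuming the `datasets` library is installed; the
# `tokens`/`ner_tags` feature names are an assumption (the usual CoNLL
# convention on the Hub), not quoted from the card above.
ncbi = load_dataset("ncbi_disease", split="train")

example = ncbi[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(token, tag)
```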
nchlt | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:af",
"language:nr",
"language:nso",
"language:ss",
"language:tn",
"language:ts",
"language:ve",
"language:xh",
"language:zu",
"license:cc-by-2.5"
] | The development of linguistic resources for use in natural language processing is of utmost importance for the continued growth of research and development in the field, especially for resource-scarce languages. In this paper we describe the process and challenges of simultaneously developing multiple linguistic resources for ten of the official languages of South Africa. The project focussed on establishing a set of foundational resources that can foster further development of both resources and technologies for the NLP industry in South Africa. The development efforts during the project included creating monolingual unannotated corpora, of which a subset of the corpora for each language was annotated on token, orthographic, morphological and morphosyntactic layers. The annotated subsets include both development and test sets and were used in the creation of five core technologies, viz. a tokeniser, sentenciser, lemmatiser, part-of-speech tagger and morphological decomposer for each language. We report on the quality of these tools for each language and provide some more context of the importance of the resources within the South African context. | 1,463 | 2 |
ncslgr | false | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:n<1K",
"source_datasets:original",
"language:ase",
"language:en",
"license:mit"
] | A small corpus of American Sign Language (ASL) video data from native signers, annotated with non-manual features. | 401 | 2 |