id: string (lengths 2 to 115)
private: bool (1 class)
tags: sequence
description: string (lengths 0 to 5.93k)
downloads: int64 (0 to 1.14M)
likes: int64 (0 to 1.79k)
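Every row below corresponds to a dataset hosted on the Hugging Face Hub, identified by its `id` column. As a quick orientation, here is a minimal loading sketch, assuming the `datasets` library is installed; split names and required configuration names vary per dataset:

```python
from datasets import load_dataset

# Load one of the datasets listed below by its `id` value.
# Some entries (e.g. code_search_net) also require a configuration name.
dataset = load_dataset("coarse_discourse")

print(dataset)              # show available splits and features
print(dataset["train"][0])  # inspect the first training example
```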
coarse_discourse
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
This dataset contains discourse annotations and relations on Reddit threads from 2016.
392
0
codah
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown" ]
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense.
5,548
2
code_search_net
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:code", "license:other", "arxiv:1909.09436" ]
CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.
5,750
55
code_x_glue_cc_clone_detection_big_clone_bench
false
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:code", "license:c-uda" ]
Given two code snippets as input, the task is binary classification (0/1), where 1 indicates semantic equivalence and 0 otherwise. Models are evaluated by F1 score (see the evaluation sketch after this entry). The dataset we use is BigCloneBench, filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree.
616
2
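Since models on this task are scored by F1, here is a minimal evaluation sketch; the `y_true`/`y_pred` lists are hypothetical stand-ins for gold and predicted clone labels, and scikit-learn is assumed to be available:

```python
from sklearn.metrics import f1_score

# Hypothetical labels: 1 = semantically equivalent (clone), 0 = not a clone.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

print(f"F1: {f1_score(y_true, y_pred):.3f}")  # 0.800
```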
code_x_glue_cc_clone_detection_poj104
false
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda" ]
Given a code snippet and a collection of candidates as input, the task is to return the top-K candidates with the same semantics. Models are evaluated by MAP score (a metric sketch follows this entry). We use the POJ-104 dataset for this task.
275
0
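The MAP metric used here is the mean over queries of average precision on the ranked candidate list. A self-contained sketch (the toy query data is made up for illustration):

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision of one ranked list against a set of relevant ids."""
    hits, score = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            score += hits / rank
    return score / max(len(relevant_ids), 1)

def mean_average_precision(runs):
    """`runs` is a list of (ranked_ids, relevant_id_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy query: two of the three returned candidates share its semantics.
print(mean_average_precision([(["a", "b", "c"], {"a", "c"})]))  # ~0.833
```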
code_x_glue_cc_cloze_testing_all
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:code", "license:c-uda" ]
Cloze tests are widely adopted in natural language processing to evaluate the performance of trained language models. The task aims to predict the answer for a blank given the blank's context, which can be formulated as a multi-choice classification problem. Here we present two cloze testing datasets in the code domain covering six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word. The only difference between ClozeTest-maxmin and ClozeTest-all is their selected word sets: ClozeTest-maxmin contains only two words while ClozeTest-all contains 930 words.
949
0
code_x_glue_cc_cloze_testing_maxmin
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:code", "license:c-uda" ]
Cloze tests are widely adopted in natural language processing to evaluate the performance of trained language models. The task aims to predict the answer for a blank given the blank's context, which can be formulated as a multi-choice classification problem. Here we present two cloze testing datasets in the code domain covering six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word. The only difference between ClozeTest-maxmin and ClozeTest-all is their selected word sets: ClozeTest-maxmin contains only two words while ClozeTest-all contains 930 words.
942
1
code_x_glue_cc_code_completion_line
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:code", "license:c-uda" ]
Complete the unfinished line given the previous context. Models are evaluated by exact match and edit similarity (see the metric sketch after this entry). We propose the line completion task to test a model's ability to autocomplete a line. Most code completion systems perform well at token-level completion but fail to complete an unfinished line such as a method call with specific parameters, a function signature, a loop condition or a variable definition. When a software developer has finished one or more tokens of the current line, the line-level completion model is expected to generate the rest of the line as syntactically correct code. The line-level code completion task shares its train/dev data with token-level completion, so after training a model on CodeCompletion-token, you can test it directly on line-level completion.
498
1
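For the two line-completion metrics mentioned above, a minimal sketch; `difflib`'s ratio is used here as a stand-in for edit similarity, since the official CodeXGLUE scripts may compute it differently (e.g., via a Levenshtein ratio):

```python
from difflib import SequenceMatcher

def exact_match(pred: str, gold: str) -> bool:
    """Whole-line exact match after trimming surrounding whitespace."""
    return pred.strip() == gold.strip()

def edit_similarity(pred: str, gold: str) -> float:
    """Character-level similarity in [0, 1]; a proxy for edit similarity."""
    return SequenceMatcher(None, pred, gold).ratio()

pred = "return max(a, b)"
gold = "return max(a, c)"
print(exact_match(pred, gold), round(edit_similarity(pred, gold), 3))
```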
code_x_glue_cc_code_completion_token
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda" ]
Predict the next code token given the context of previous tokens. Models are evaluated by token-level accuracy (a sketch follows this entry). Code completion is one of the most widely used features in software development through IDEs, and an effective code completion tool can improve software developers' productivity. We provide code completion evaluation tasks at two granularities: token level and line level. Here we introduce token-level code completion, which is analogous to language modeling: models should be able to predict the next token of arbitrary type.
413
0
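Token-level accuracy as used here is simply the fraction of positions where the predicted token equals the gold token; a minimal sketch with made-up tokens:

```python
def token_accuracy(pred_tokens, gold_tokens):
    """Fraction of positions where prediction and gold token agree."""
    correct = sum(p == g for p, g in zip(pred_tokens, gold_tokens))
    return correct / len(gold_tokens)

print(token_accuracy(["x", "=", "1"], ["x", "=", "2"]))  # 0.666...
```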
code_x_glue_cc_code_refinement
false
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda", "debugging", "arxiv:1812.08693" ]
We use the dataset released by this paper (https://arxiv.org/pdf/1812.08693.pdf). The source side is a Java function with bugs and the target side is the refined one. All function and variable names are normalized. The dataset contains two subsets (i.e., small and medium) based on function length.
1,336
0
code_x_glue_cc_code_to_code_trans
false
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda", "code-to-code" ]
The dataset is collected from several public repos, including Lucene (http://lucene.apache.org/), POI (http://poi.apache.org/), JGit (https://github.com/eclipse/jgit/) and Antlr (https://github.com/antlr/). We collect both the Java and C# versions of the code and find the parallel functions. After removing duplicates and functions with empty bodies, we split the whole dataset into training, validation and test sets.
871
1
code_x_glue_cc_defect_detection
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda" ]
Given a piece of source code, the task is to identify whether it is insecure code that may attack software systems, e.g., through resource leaks, use-after-free vulnerabilities or DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code. The dataset we use comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. We combine all projects and split them 80%/10%/10% for training/dev/test.
568
5
code_x_glue_ct_code_to_text
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "language:en", "license:c-uda", "code-to-text" ]
The dataset we use comes from CodeSearchNet, filtered as follows (a filtering sketch follows this entry):
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documentation contains fewer than 3 or more than 256 tokens.
- Remove examples whose documentation contains special tokens (e.g. <img ...> or https:...).
- Remove examples whose documentation is not in English.
1,621
21
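A sketch of the four filtering rules listed above; the AST check is passed in as a flag because parsing is language-specific, and the token counting and English test are simplified stand-ins for whatever the original pipeline used:

```python
import re

def keep_example(code_parses: bool, doc: str) -> bool:
    """Approximate the CodeSearchNet filtering rules described above."""
    if not code_parses:                        # code must parse into an AST
        return False
    n_tokens = len(doc.split())                # crude whitespace tokenization
    if n_tokens < 3 or n_tokens > 256:         # documentation length bounds
        return False
    if re.search(r"<img[^>]*>|https?:", doc):  # special tokens / URLs
        return False
    if not doc.isascii():                      # crude proxy for "is English"
        return False
    return True

print(keep_example(True, "Returns the maximum of two integers."))  # True
```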
code_x_glue_tc_nl_code_search_adv
false
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:100K<n<1M", "source_datasets:original", "language:code", "language:en", "license:c-uda" ]
The dataset we use comes from CodeSearchNet, filtered as follows:
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documentation contains fewer than 3 or more than 256 tokens.
- Remove examples whose documentation contains special tokens (e.g. <img ...> or https:...).
- Remove examples whose documentation is not in English.
297
0
code_x_glue_tc_text_to_code
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:100K<n<1M", "source_datasets:original", "language:code", "language:en", "license:c-uda", "text-to-code" ]
We use the CONCODE dataset, a widely used code generation dataset from Iyer et al.'s EMNLP 2018 paper Mapping Language to Code in Programmatic Context. See the paper for details.
1,155
7
code_x_glue_tt_text_to_text
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:da", "language:en", "language:lv", "language:nb", "language:zh", "license:c-uda", "code-documentation-translation" ]
The dataset we use is crawled and filtered from Microsoft Documentation, whose documents are located at https://github.com/MicrosoftDocs/.
732
1
com_qa
false
[ "language:en" ]
ComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website. By collecting questions from such a site we ensure that the information needs are ones of interest to actual users. Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research than those collected from a search engine's query log. The dataset contains questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives, superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.
328
1
common_gen
false
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "concepts-to-text", "arxiv:1911.03705" ]
CommonGen is a constrained text generation task, associated with a benchmark dataset, that explicitly tests machines for the ability of generative commonsense reasoning. Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts. CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowdsourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total.
31,531
8
common_language
false
[ "task_categories:audio-classification", "task_ids:speaker-identification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended|common_voice", "language:ar", "language:br", "language:ca", "language:cnh", "language:cs", "language:cv", "language:cy", "language:de", "language:dv", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fr", "language:fy", "language:ia", "language:id", "language:it", "language:ja", "language:ka", "language:kab", "language:ky", "language:lv", "language:mn", "language:mt", "language:nl", "language:pl", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sah", "language:sl", "language:sv", "language:ta", "language:tr", "language:tt", "language:uk", "language:zh", "license:cc-by-4.0" ]
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
657
6
common_voice
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:extended|common_voice", "language:ab", "language:ar", "language:as", "language:br", "language:ca", "language:cnh", "language:cs", "language:cv", "language:cy", "language:de", "language:dv", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fr", "language:fy", "language:ga", "language:hi", "language:hsb", "language:hu", "language:ia", "language:id", "language:it", "language:ja", "language:ka", "language:kab", "language:ky", "language:lg", "language:lt", "language:lv", "language:mn", "language:mt", "language:nl", "language:or", "language:pa", "language:pl", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sah", "language:sl", "language:sv", "language:ta", "language:th", "language:tr", "language:tt", "language:uk", "language:vi", "language:vot", "language:zh", "license:cc0-1.0" ]
Common Voice is Mozilla's initiative to help teach machines how real people speak. The dataset currently consists of 7,335 validated hours of speech in 60 languages, but we’re always adding more voices and languages.
14,311
71
commonsense_qa
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "arxiv:1811.00937" ]
CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains 12,102 questions with one correct answer and four distractor answers. The dataset is provided in two major training/validation/testing set splits: "Random split", which is the main evaluation split, and "Question token split"; see the paper for details.
7,568
13
competition_math
false
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "explanation-generation", "arxiv:2103.03874" ]
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems from mathematics competitions, including the AMC 10, AMC 12, AIME, and more. Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations.
5,099
6
compguesswhat
false
[ "task_categories:visual-question-answering", "task_ids:visual-question-answering", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other-guesswhat", "language:en", "license:unknown" ]
CompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference scene is an image in VisualGenome. Visit the website for more details: https://compguesswhat.github.io
404
1
conceptnet5
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:it", "language:ja", "language:nl", "language:pt", "language:ru", "language:zh", "license:cc-by-4.0" ]
This dataset is designed to provide training data for common sense relationships pulled together from various sources. The dataset is multilingual. See language codes and language info here: https://github.com/commonsense/conceptnet5/wiki/Languages This dataset provides an interface for the conceptnet5 csv file, and some (but not all) of the raw text data used to build conceptnet5: omcsnet_sentences_free.txt and omcsnet_sentences_more.txt. One use of this dataset would be to learn to extract the ConceptNet relationship from the OMCSNet sentences. Conceptnet5 has 34,074,917 relationships; of those, 2,176,099 have related surface text sentences. omcsnet_sentences_free has 898,161 lines; omcsnet_sentences_more has 2,001,736 lines. Original downloads are available here: https://github.com/commonsense/conceptnet5/wiki/Downloads For more information, see: https://github.com/commonsense/conceptnet5/wiki The OMCSNet data comes with the following warning from the authors of the above site: Remember: this data comes from various forms of crowdsourcing. Sentences in these files are not necessarily true, useful, or appropriate.
626
9
conll2000
false
[ "language:en" ]
Text chunking consists of dividing a text into syntactically correlated groups of words. For example, the sentence "He reckons the current account deficit will narrow to only # 1.8 billion in September ." can be divided as follows: [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ] [PP in ] [NP September ] . (A rendering sketch for this bracketed format follows this entry.) Text chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test data for this task are available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ) as the widely used data for noun phrase chunking: sections 15-18 as training data (211,727 tokens) and section 20 as test data (47,377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by Sabine Buchholz from Tilburg University, The Netherlands.
373
2
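To make the bracketed chunk notation above concrete, here is a small rendering sketch that turns B-/I-/O tags back into that format; it is a sketch only, not the official CoNLL evaluation script, and it assumes well-formed IOB2 tags:

```python
def bracketed_chunks(tokens, tags):
    """Render IOB2 chunk tags in the bracketed style, e.g. [NP He ] [VP reckons ]."""
    parts, open_chunk = [], False
    for token, tag in zip(tokens, tags):
        if open_chunk and (tag.startswith("B-") or tag == "O"):
            parts.append("]")       # close the chunk that just ended
            open_chunk = False
        if tag.startswith("B-"):
            parts.append("[" + tag[2:])  # open a new chunk of this type
            open_chunk = True
        parts.append(token)
    if open_chunk:
        parts.append("]")
    return " ".join(parts)

tokens = ["He", "reckons", "the", "current", "account", "deficit", "."]
tags = ["B-NP", "B-VP", "B-NP", "I-NP", "I-NP", "I-NP", "O"]
print(bracketed_chunks(tokens, tags))
# [NP He ] [VP reckons ] [NP the current account deficit ] .
```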
conll2002
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:es", "language:nl", "license:unknown" ]
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example: [PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] . The shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training). The train/validation/test sets are available in Spanish and Dutch. For more details see https://www.clips.uantwerpen.be/conll2002/ner/ and https://www.aclweb.org/anthology/W02-2024/
932
0
conll2003
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-reuters-corpus", "language:en", "license:other" ]
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag (a minimal parser sketch for this format follows this entry). The chunk tags and the named entity tags have the format I-TYPE, which means that the word is inside a phrase of type TYPE. Only if two phrases of the same type immediately follow each other will the first word of the second phrase have tag B-TYPE, to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset uses the IOB2 tagging scheme, whereas the original dataset uses IOB1. For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
28,853
42
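A minimal parser sketch for the four-column format described above; the `-DOCSTART-` separator handling matches the raw CoNLL-2003 files, and `path` is a hypothetical local copy of the data:

```python
def read_conll2003(path):
    """Parse lines of `word pos chunk ner`; blank lines end sentences."""
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("-DOCSTART-"):
                if current:
                    sentences.append(current)
                    current = []
                continue
            word, pos, chunk, ner = line.split()
            current.append((word, pos, chunk, ner))
    if current:
        sentences.append(current)
    return sentences
```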
conllpp
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|conll2003", "language:en", "license:unknown" ]
CoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set have been manually corrected. The training set and development set are included for completeness. For more details see https://www.aclweb.org/anthology/D19-1519/ and https://github.com/ZihanWangKi/CrossWeigh
1,806
2
consumer-finance-complaints
false
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0" ]
null
312
4
conv_ai
false
[ "task_categories:conversational", "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "evaluating-dialogue-systems" ]
ConvAI is a dataset of human-to-bot conversations labelled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains the information on the quality of utterances and entire dialogues, that can guide a dialogue system in search of better answers.
1,231
2
conv_ai_2
false
[ "task_categories:conversational", "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "evaluating-dialogue-systems", "arxiv:1902.00098" ]
ConvAI is a dataset of human-to-bot conversations labelled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains the information on the quality of utterances and entire dialogues, that can guide a dialogue system in search of better answers.
1,642
9
conv_ai_3
false
[ "task_categories:conversational", "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "evaluating-dialogue-systems", "arxiv:2009.11352" ]
The Conv AI 3 challenge is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of conversational systems is to return an appropriate answer in response to user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through diversification of the search result page. It is, however, much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings:
- a user asks an ambiguous question (where an ambiguous question is one to which more than one answer can be returned);
- the system must identify that the question is ambiguous and, instead of trying to answer it directly, ask a good clarifying question.
1,297
10
conv_questions
false
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:open-domain-qa", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1910.03262" ]
ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata. The questions feature a variety of complex question phenomena like comparisons, aggregations, compositionality, and temporal reasoning.
273
1
coqa
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|race", "source_datasets:extended|cnn_dailymail", "source_datasets:extended|wikipedia", "source_datasets:extended|other", "language:en", "license:other", "conversational-qa", "arxiv:1808.07042", "arxiv:1704.04683", "arxiv:1506.03340" ]
CoQA: A Conversational Question Answering Challenge
910
9
allenai/cord19
false
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nd-4.0", "license:cc-by-sa-4.0", "license:other", "arxiv:2004.07180" ]
The Covid-19 Open Research Dataset (CORD-19) is a growing resource of scientific papers on Covid-19 and related historical coronavirus research. CORD-19 is designed to facilitate the development of text mining and information retrieval systems over its rich collection of metadata and structured full-text papers. Since its release, CORD-19 has been downloaded over 75K times and has served as the basis of many Covid-19 text mining and discovery systems. The dataset itself doesn't define a specific task, but there is a Kaggle challenge that defines 17 open research questions to be solved with the dataset: https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks
702
1
cornell_movie_dialog
false
[ "language:en" ]
This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts:
- 220,579 conversational exchanges between 10,292 pairs of movie characters
- involving 9,035 characters from 617 movies
- 304,713 utterances in total
- movie metadata included: genres, release year, IMDB rating, number of IMDB votes
- character metadata included: gender (for 3,774 characters), position on movie credits (3,321 characters)
647
5
cos_e
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|commonsense_qa", "language:en", "license:unknown", "arxiv:1906.02361" ]
Common Sense Explanations (CoS-E) allows for training language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework.
16,365
3
cosmos_qa
false
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1909.00277" ]
Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning the likely causes or effects of events that require reasoning beyond the exact text spans in the context.
15,522
4
counter
false
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:ur", "license:cc-by-nc-sa-4.0" ]
The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1,200 documents with real examples of text reuse from the field of journalism. It has been manually annotated at the document level with three levels of reuse: wholly derived, partially derived and non-derived.
274
0
covid_qa_castorini
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2004.11339" ]
CovidQA is the beginnings of a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge.
1,183
0
covid_qa_deepset
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0" ]
COVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.
713
1
covid_qa_ucsd
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "language:zh", "license:unknown", "arxiv:2005.05442" ]
null
411
1
covid_tweets_japanese
false
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ja", "license:cc-by-nd-4.0" ]
53,640 Japanese tweets annotated with whether or not each tweet is related to COVID-19. The annotation was decided by majority vote among 5-10 crowd workers. Target tweets include "COVID" or "コロナ". The tweets span roughly January 2020 to June 2020. The original tweets are not included; please use the Twitter API to retrieve them.
271
0
covost2
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended|other-common-voice", "language:ar", "language:ca", "language:cy", "language:de", "language:es", "language:et", "language:fa", "language:fr", "language:id", "language:it", "language:ja", "language:lv", "language:mn", "language:nl", "language:pt", "language:ru", "language:sl", "language:sv", "language:ta", "language:tr", "language:zh", "license:cc-by-nc-4.0", "arxiv:2007.10310" ]
CoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English and from English into 15 languages. The dataset is created using Mozilla's open-source Common Voice database of crowdsourced voice recordings. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .mp3 format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import torchaudio

def map_to_array(batch):
    speech_array, _ = torchaudio.load(batch["file"])
    batch["speech"] = speech_array.numpy()
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
5,550
4
cppe-5
false
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "medical-personal-protective-equipment-detection", "arxiv:2112.09569" ]
CPPE-5 (Medical Personal Protective Equipment) is a new challenging dataset whose goal is to allow the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular datasets that focus on broad-level categories.
1,179
3
craigslist_bargains
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1808.09637" ]
We study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an item for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language of bargaining.
1,222
2
crawl_domain
false
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-Common-Crawl", "source_datasets:original", "language:en", "license:mit", "web-search", "text-to-speech", "arxiv:2011.03138" ]
Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. "commoncrawl" to "common crawl"). Breaking domain names such as "openresearch" into component words "open" and "research" is important for applications such as Text-to-Speech synthesis and web search. Common Crawl is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so we had to manually annotate the data. The dataset is stored as a plaintext file where each line is an example of space-separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first (see the sketch after this entry).
278
0
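A small sketch of turning one annotated line into training pairs, including the harder lowercased variant described above; the (input, segments) pair format is an assumption for illustration:

```python
def make_examples(line: str):
    """From 'Open BSD' build ('OpenBSD', ['Open', 'BSD']) and its lowercased twin."""
    segments = line.strip().split(" ")
    easy = ("".join(segments), segments)
    hard = ("".join(segments).lower(), [s.lower() for s in segments])
    return easy, hard

print(make_examples("Open BSD"))
# (('OpenBSD', ['Open', 'BSD']), ('openbsd', ['open', 'bsd']))
```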
crd3
false
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0" ]
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues.
404
6
crime_and_punish
false
[ "language:en" ]
\
1,325
1
crows_pairs
false
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "bias-evaluation" ]
CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models (MLMs).
5,241
1
cryptonite
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "arxiv:2103.01242" ]
Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).
273
2
cs_restaurants
false
[ "task_categories:text2text-generation", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-san-francisco-restaurants", "language:cs", "license:cc-by-4.0", "intent-to-text", "arxiv:1910.05298" ]
This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015).
270
0
cuad
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2103.06268" ]
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.
1,086
18
curiosity_dialogs
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "conversational-curiosity" ]
This dataset contains 14K dialogs (181K utterances) where users and assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.
295
3
daily_dialog
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "emotion-classification", "dialog-act-classification" ]
We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication style and cover various topics about daily life. We also manually label the dataset with communication intention and emotion information. Then, we evaluate existing approaches on the DailyDialog dataset and hope it benefits the research field of dialog systems.
3,829
41
dane
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-Danish-Universal-Dependencies-treebank", "language:da", "license:cc-by-sa-4.0" ]
The DaNE dataset has been annotated with Named Entities for PER, ORG and LOC by the Alexandra Institute. It is a reannotation of the UD-DDT (Universal Dependency - Danish Dependency Treebank) which has annotations for dependency parsing and part-of-speech (POS) tagging. The Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998).
323
3
danish_political_comments
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:unknown" ]
The dataset consists of 9,008 sentences labelled with fine-grained polarity in the range from -2 to 2 (negative to positive). The quality of the fine-grained labels has not been cross-validated and is therefore subject to uncertainty; however, the simple polarity has been cross-validated and is therefore considered more reliable.
269
0
dart
false
[ "task_categories:tabular-to-text", "task_ids:rdf-to-text", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|wikitable_questions", "source_datasets:extended|wikisql", "source_datasets:extended|web_nlg", "source_datasets:extended|cleaned_e2e", "language:en", "license:mit", "arxiv:2007.02871" ]
DART is a large, open-domain structured DAta Record to Text generation corpus with high-quality sentence annotations, where each input is a set of entity-relation triples following a tree-structured ontology. It consists of 82,191 examples across different domains; each input is a semantic RDF triple set derived from data records in tables and the tree ontology of the table schema, annotated with a sentence description that covers all facts in the triple set. DART is released in the following paper, where you can find more details and baseline results: https://arxiv.org/abs/2007.02871
306
3
datacommons_factcheck
false
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0" ]
A dataset of fact-checked claims by news media, maintained by datacommons.org.
406
0
dbpedia_14
false
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0" ]
The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes from DBpedia 2014; they are listed in classes.txt. From each of these 14 ontology classes, we randomly choose 40,000 training samples and 5,000 testing samples, so the total size of the training dataset is 560,000 and the testing dataset 70,000. There are 3 columns in the dataset (same for train and test splits), corresponding to class index (1 to 14), title and content. The title and content are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). There are no newlines in title or content. (A parsing sketch follows this entry.)
11,214
6
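The doubled-double-quote escaping described above is exactly what Python's `csv` module handles by default, so a parsing sketch is short; `train.csv` is a hypothetical local copy of the data:

```python
import csv

with open("train.csv", newline="", encoding="utf-8") as f:
    for class_index, title, content in csv.reader(f):
        print(int(class_index), title)  # class index is 1..14
        break  # just peek at the first row
```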
dbrd
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:nl", "license:cc-by-nc-sa-4.0", "arxiv:1910.00896" ]
The Dutch Book Review Dataset (DBRD) contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and created due to a lack of annotated datasets in Dutch that are suitable for this task.
305
2
deal_or_no_dialog
false
[ "task_categories:conversational", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1706.05125" ]
A large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other's reward functions must reach an agreement (i.e., a deal) via natural language dialogue.
411
2
definite_pronoun_resolution
false
[ "task_categories:token-classification", "task_ids:word-sense-disambiguation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown" ]
Composed by 30 students from one of the author's undergraduate classes. These sentence pairs cover topics ranging from real events (e.g., Iran's plan to attack the Saudi ambassador to the U.S.) to events/characters in movies (e.g., Batman) and purely imaginary situations, largely reflecting pop culture as perceived by American kids born in the early 90s. Each annotated example spans four lines: the first line contains the sentence, the second line contains the target pronoun, the third line contains the two candidate antecedents, and the fourth line contains the correct antecedent (a grouping sketch for this format follows this entry). If the target pronoun appears more than once in the sentence, its first occurrence is the one to be resolved.
314
3
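A sketch that groups the raw file into the four-line records described above; how the two candidates are separated within their line is an assumption here (comma-separated):

```python
def read_examples(path):
    """Group lines into records: sentence, pronoun, candidates, answer."""
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    examples = []
    for i in range(0, len(lines), 4):
        sentence, pronoun, candidates, answer = lines[i:i + 4]
        examples.append({
            "sentence": sentence,
            "pronoun": pronoun,
            "candidates": candidates.split(","),  # assumed separator
            "answer": answer,
        })
    return examples
```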
dengue_filipino
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:tl", "license:unknown" ]
Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled with one or more of five classes. Collected as tweets.
270
0
dialog_re
false
[ "task_categories:other", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:other", "relation-extraction", "arxiv:2004.08056" ]
DialogRE is the first human-annotated dialogue based relation extraction (RE) dataset aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. The dataset annotates all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends.
300
6
diplomacy_detection
false
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:unknown" ]
null
273
0
disaster_response_messages
false
[ "task_categories:text2text-generation", "task_categories:text-classification", "task_ids:intent-classification", "task_ids:sentiment-classification", "task_ids:text-simplification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:es", "language:fr", "language:ht", "language:ur", "license:unknown" ]
This dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and hundreds of different disasters. The data has been encoded with 36 different categories related to disaster response, and messages containing sensitive information have been removed in their entirety. Upon release, this is the featured dataset of a new Udacity course on Data Science and the AI4ALL summer school, and it is especially useful for text analytics and natural language processing (NLP) tasks and models. The input data in this job contains thousands of untranslated disaster-related messages and their English translations.
288
2
discofuse
false
[ "task_categories:text2text-generation", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "sentence-fusion", "arxiv:1902.10526" ]
DISCOFUSE is a large-scale dataset for discourse-based sentence fusion.
1,419
0
discovery
false
[ "task_categories:text-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:unknown", "discourse-marker-prediction" ]
null
1,354
4
disfl_qa
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2106.04016" ]
Disfl-QA is a targeted dataset for contextual disfluencies in an information-seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 (Rajpurkar et al., 2018) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distractors. The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90% of the disfluencies are corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a major gap between the speech and NLP research communities. We hope the dataset can serve as a benchmark for testing the robustness of models against disfluent inputs. Our experiments reveal that state-of-the-art models are brittle when subjected to disfluent inputs from Disfl-QA. Detailed experiments and analyses can be found in our paper.
277
0
doc2dial
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-3.0" ]
Doc2dial is a dataset of goal-oriented dialogues that are grounded in the associated documents. It includes over 4,500 annotated conversations, with an average of 14 turns, grounded in over 450 documents from four domains. Compared to prior document-grounded dialogue datasets, this dataset covers a variety of dialogue scenes in information-seeking conversations.
570
1
docred
false
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:mit", "arxiv:1906.06127" ]
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata, with three features:
- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.
- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.
- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.
1,488
3
doqa
false
[ "language:en", "arxiv:2005.01328" ]
DoQA is a dataset for accessing domain-specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues (10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also Community Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often referred to as internal “knowledge bases”. These dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack Exchange (https://stackexchange.com/), and the domain expert who replies to the questions by selecting a short span of text from the long textual reply in the original post. The expert can rephrase the selected span in order to make it look more natural. The dataset covers unanswerable questions and some relevant dialogue acts. DoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain-specific FAQs.
536
0
dream
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
DREAM is a multiple-choice Dialogue-based REAding comprehension exaMination dataset. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding.
7,745
3
drop
false
[ "task_categories:question-answering", "task_categories:text2text-generation", "task_ids:extractive-qa", "task_ids:abstractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0" ]
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. DROP is a crowdsourced, adversarially-created, 96k-question benchmark in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than was necessary for prior datasets.
1,491
4
duorc
false
[ "task_categories:question-answering", "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "arxiv:1804.07927" ]
DuoRC contains 186,089 unique question-answer pairs created from a collection of 7,680 pairs of movie plots, where each pair in the collection reflects two versions of the same movie.
30,033
14
dutch_social
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:multi-label-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:nl", "license:cc-by-nc-4.0" ]
The dataset contains 271,342 tweets. The tweets were filtered via the official Twitter API to contain tweets in the Dutch language or tweets by users who have specified their location within the geographical boundaries of the Netherlands. Using natural language processing, we have classified the tweets by their HISCO codes. If a user has provided a location within Dutch boundaries, we have also classified them into their respective provinces. The objective of this dataset is to make research data publicly available in a FAIR (Findable, Accessible, Interoperable, Reusable) way. Twitter's Terms of Service apply. Licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27).
435
4
dyk
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:bsd-3-clause" ]
The Did You Know (Polish: Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict whether the answer is correct. We chose the negatives that have the largest token overlap with the question.
269
0
e2e_nlg
false
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "meaning-representation-to-text", "arxiv:1706.09254", "arxiv:1901.11528" ]
The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain; it is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. E2E is released in the following paper, where you can find more details and baseline results: https://arxiv.org/abs/1706.09254 (A sketch of one MR/reference pair follows this entry.)
663
2
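Since E2E is a meaning-representation-to-text task, each record pairs a structured MR with a human reference. A hedged sketch follows; the field names meaning_representation and human_reference are assumptions based on common E2E loaders, not guaranteed by this catalog.

```python
# Sketch of inspecting one E2E pair: a restaurant-domain meaning
# representation (MR) and its crowd-written textual realization.
# Field names are assumptions -- verify against e2e.features first.
from datasets import load_dataset

e2e = load_dataset("e2e_nlg", split="train")
print(e2e.features)                    # confirm the real column names
row = e2e[0]
print(row["meaning_representation"])   # e.g. "name[...], food[...], ..."
print(row["human_reference"])          # the human-written description
```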
e2e_nlg_cleaned
false
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "meaning-representation-to-text", "arxiv:1706.09254", "arxiv:1901.11528" ]
An update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper: Ondřej Dušek, David M. Howcroft, and Verena Rieser (2019): Semantic Noise Matters for Neural Natural Language Generation. In INLG, Tokyo, Japan.
1,161
2
ecb
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:sk", "language:sl", "license:unknown" ]
Original source: website and documentation from the European Central Bank, compiled and made available by Alberto Simoes (thank you very much!). 19 languages, 170 bitexts; total number of files: 340; total number of tokens: 757.37M; total number of sentence fragments: 30.55M.
796
0
ecthr_cases
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "rationale-extraction", "legal-judgment-prediction", "arxiv:2103.13084" ]
The ECtHR Cases dataset is designed for experimentation with neural judgment prediction and rationale extraction on ECtHR cases.
2,294
8
eduge
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:mn", "license:unknown" ]
The Eduge news classification dataset is provided by Bolorsoft LLC and is used for training the Eduge.mn production news classifier. It contains 75K news articles in 9 categories: урлаг соёл (art and culture), эдийн засаг (economy), эрүүл мэнд (health), хууль (law), улс төр (politics), спорт (sports), технологи (technology), боловсрол (education) and байгал орчин (environment).
269
3
ehealth_kd
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:es", "license:cc-by-nc-sa-4.0", "relation-prediction" ]
Dataset of the eHealth Knowledge Discovery Challenge at IberLEF 2020. It is designed for the identification of semantic entities and relations in Spanish health documents.
267
1
eitb_parcc
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:es", "language:eu", "license:unknown" ]
EiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by Vicomtech (https://www.vicomtech.org), extracted from comparable news produced by the Basque public broadcasting group Euskal Irrati Telebista.
269
1
electricity_load_diagrams
false
[ "task_categories:time-series-forecasting", "task_ids:univariate-time-series-forecasting", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "license:unknown" ]
This dataset contains hourly kW electricity consumption time series for 370 Portuguese clients from 2011 to 2014.
405
2
eli5
false
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:open-domain-abstractive-qa", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "arxiv:1907.09190", "arxiv:1904.04047" ]
Explain Like I'm 5: a long-form question answering dataset.
6,833
23
eli5_category
false
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:open-domain-abstractive-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|eli5", "language:en", "license:unknown" ]
The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. After 2017, a tagging system was introduced to this subreddit so that questions could be categorized by topic according to their tags. Since the training and validation sets are built from questions in different topics, the dataset is expected to alleviate the train/validation overlap issue in the original ELI5 dataset.
349
2
emea
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "license:unknown" ]
This is a parallel corpus made out of PDF documents from the European Medicines Agency. All files were automatically converted from PDF to plain text using pdftotext with the command-line arguments -layout -nopgbrk -eol unix. There are some known problems with tables and multi-column layouts; some of them are fixed in the current version. Source: http://www.emea.europa.eu/ 22 languages, 231 bitexts; total number of files: 41,957; total number of tokens: 311.65M; total number of sentence fragments: 26.51M.
795
0
emo
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
In this dataset, given a textual dialogue, i.e. an utterance along with two previous turns of context, the goal is to infer the underlying emotion of the utterance by choosing from four emotion classes: Happy, Sad, Angry and Others.
2,302
1
dair-ai/emotion
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "emotion-classification" ]
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper. (A minimal label-decoding sketch follows this entry.)
2,587
79
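Because the six emotion classes are typically stored as integer labels, decoding them back to names is the first thing most users need. A sketch follows, under the assumption that the label column is a ClassLabel feature; verify against the loaded features before relying on it.

```python
# Sketch: map integer emotion labels back to their string names.
# Assumes the "label" column is a ClassLabel -- check features first.
from datasets import load_dataset

emotion = load_dataset("dair-ai/emotion", split="train")
label_feature = emotion.features["label"]       # ClassLabel with 6 names
example = emotion[0]
print(example["text"])
print(label_feature.int2str(example["label"]))  # e.g. "sadness" or "joy"
```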
emotone_ar
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "license:unknown" ]
Dataset of 10,065 Arabic tweets for emotion detection in Arabic text.
272
5
empathetic_dialogues
false
[ "task_categories:conversational", "task_categories:question-answering", "task_ids:dialogue-generation", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "arxiv:1811.00207" ]
Dataset accompanying "Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset" (original implementation in PyTorch).
904
22
enriched_web_nlg
false
[ "task_categories:tabular-to-text", "task_ids:rdf-to-text", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-web-nlg", "language:de", "language:en", "license:cc-by-sa-4.0" ]
WebNLG is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, like other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.
1,383
1
eraser_multi_rc
false
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other" ]
Eraser Multi RC is a dataset for queries over multi-line passages, along with answers and a rationale. Each example in this dataset has the following 5 parts: 1. A multi-line passage 2. A query about the passage 3. An answer to the query 4. A classification as to whether the answer is right or wrong 5. An explanation justifying the classification (A schema-inspection sketch follows this entry.)
310
0
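The five-part structure described above maps onto the dataset's columns, so the safest approach is to inspect the schema directly rather than guess field names. A minimal sketch; the column names are whatever the loader actually defines.

```python
# Sketch: inspect the schema to see how passage, query, answer,
# correctness label, and rationale are exposed as columns.
from datasets import load_dataset

mrc = load_dataset("eraser_multi_rc", split="train")
print(mrc.features)   # the actual column names and types
print(mrc[0])         # one full passage/query/answer/label/rationale record
```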
esnli
false
[ "language:en" ]
The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to include human-annotated natural language explanations of the entailment relations.
2,951
6
eth_py150_open
false
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0", "contextual-embeddings" ]
A redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code'
273
0
ethos
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "language_creators:other", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:agpl-3.0", "Hate Speech Detection", "arxiv:2006.08328" ]
ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset: Ethos_Dataset_Binary contains 998 comments, along with a label indicating the presence or absence of hate speech; 565 of them do not contain hate speech, while the remaining 433 do. Ethos_Dataset_Multi_Label contains 8 labels for the 433 comments with hate speech content. These labels are violence (whether it incites violence (1) or not (0)), directed_vs_general (whether it is directed at a person (1) or a group (0)), and 6 labels for the category of hate speech: gender, race, national_origin, disability, religion and sexual_orientation. (A config-loading sketch follows this entry.)
3,200
7
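The two ETHOS variations are typically exposed as named configurations of a single dataset id. A sketch follows, with the caveat that the config names "binary" and "multilabel" are assumptions; get_dataset_config_names lists the real ones.

```python
# Sketch: discover and load one of the two ETHOS variations.
# Config names below are assumptions -- list them first to be sure.
from datasets import get_dataset_config_names, load_dataset

print(get_dataset_config_names("ethos"))   # e.g. ["binary", "multilabel"]
binary = load_dataset("ethos", "binary", split="train")
print(binary[0])                           # comment text plus 0/1 hate label
```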
eu_regulatory_ir
false
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "document-to-document-retrieval", "arxiv:2101.10726" ]
EURegIR: Regulatory Compliance IR (EU/UK)
400
1
eurlex
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "legal-topic-classification" ]
EURLEX57K contains 57k legislative documents in English from the EUR-Lex portal, annotated with EUROVOC concepts.
655
3