Datasets:
Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR (#4337)
* Eval metadata batch 3: Quora, Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR
* Update datasets/quora/README.md
Co-authored-by: Quentin Lhoest <[email protected]>
* Update README.md
removing ROUGE args
* Update datasets/rotten_tomatoes/README.md
Co-authored-by: lewtun <[email protected]>
* Update datasets/rotten_tomatoes/README.md
Co-authored-by: lewtun <[email protected]>
* Update datasets/squad/README.md
Co-authored-by: lewtun <[email protected]>
* Update datasets/squad_v2/README.md
Co-authored-by: lewtun <[email protected]>
* Update datasets/squad/README.md
Co-authored-by: lewtun <[email protected]>
* Update datasets/squad_v2/README.md
Co-authored-by: lewtun <[email protected]>
* Update datasets/squad_v2/README.md
Co-authored-by: lewtun <[email protected]>
* Update README.md
removing eval for quora
Co-authored-by: sashavor <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>
Co-authored-by: lewtun <[email protected]>
Commit from https://github.com/huggingface/datasets/commit/8ccf58b77343f323ba6654250f88b69699a57b8e
@@ -19,6 +19,18 @@ task_categories:
 - summarization
 task_ids:
 - summarization-other-reddit-posts-summarization
+train-eval-index:
+- config: default
+  task: summarization
+  task_id: summarization
+  splits:
+    train_split: train
+  col_mapping:
+    content: text
+    summary: target
+  metrics:
+    - type: rouge
+      name: Rouge
 ---
 
 # Dataset Card for Reddit Webis-TLDR-17
@@ -49,7 +61,7 @@ task_ids:
 
 ## Dataset Description
 
-- **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
+- **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
 - **Repository:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
 - **Paper:** [https://aclanthology.org/W17-4508]
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -81,7 +93,7 @@ Known ROUGE scores achieved for the Webis-TLDR-17:
 
 ### Languages
 
-English
+English
 
 ## Dataset Structure
 
@@ -176,7 +188,7 @@ This dataset has been created to serve as a source of large-scale summarization
 
 Reddit users write TL;DRs with various intentions, such as providing a “true” summary, asking questions or for help, or forming judgments and conclusions. As noted in the paper introducing the dataset, although the first kind of TL;DR posts are most important for training summarization models, yet, the latter allow for various alternative summarization-related tasks.
 
-Although filtering was performed abusive language maybe still be present.
+Although filtering was performed abusive language maybe still be present.
 
 ## Additional Information
 
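The `col_mapping` entry in the `train-eval-index` metadata above declares how the dataset's raw columns (`content`, `summary`) should be renamed to the columns an evaluation harness expects (`text`, `target`). A minimal sketch of what applying such a mapping to one example looks like; `apply_col_mapping` is a hypothetical helper written for illustration, not part of the `datasets` library:

```python
# Hypothetical helper illustrating how a train-eval-index col_mapping
# renames example columns. The mapping values mirror the ones added in
# this commit: content -> text, summary -> target.
COL_MAPPING = {"content": "text", "summary": "target"}


def apply_col_mapping(example: dict, col_mapping: dict) -> dict:
    """Rename the keys of one example according to col_mapping,
    leaving unmapped keys untouched."""
    return {col_mapping.get(key, key): value for key, value in example.items()}


raw = {"content": "Long reddit post ...", "summary": "TL;DR: it was long."}
mapped = apply_col_mapping(raw, COL_MAPPING)
print(mapped)  # {'text': 'Long reddit post ...', 'target': 'TL;DR: it was long.'}
```

In practice the evaluation backend that consumes this metadata performs the renaming internally; the sketch only makes the declared column correspondence concrete.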