Columns: `id` (string, length 2–115), `private` (bool, 1 class), `tags` (sequence), `description` (string, length 0–5.93k), `downloads` (int64, 0–1.14M), `likes` (int64, 0–1.79k).

| id | private | tags | description | downloads | likes |
|---|---|---|---|---|---|
| PanoEvJ/GenAI-sample | false | [] | null | 0 | 0 |
| abidlabs/abc-love | false | [] | null | 0 | 0 |
| Seyfelislem/cv_11_arabic_test_noisy_II | false | [] | null | 0 | 0 |
| abidlabs/abc-love2 | false | [] | null | 0 | 0 |
| datablations/mup2 | false | [] | null | 0 | 0 |
| Krzysko1/komandiero_bombardiero | false | [ "license:cc-by-nc-4.0" ] | null | 0 | 0 |
| amishshah/slay | false | [] | null | 0 | 0 |
| amishshah/imbalanced_0 | false | [] | null | 0 | 0 |
| amishshah/imbalanced_1 | false | [] | null | 0 | 0 |
| amishshah/imbalanced_2 | false | [] | null | 0 | 0 |
| amishshah/imbalanced_3 | false | [] | null | 0 | 0 |
| Dampish/Proccessed-GPT-NEO | false | [ "license:cc-by-nc-4.0" ] | null | 0 | 0 |
| amishshah/imbalanced_4 | false | [] | null | 0 | 0 |
| ashwinR/ChatgptExplanation | false | [ "license:mit" ] | null | 0 | 0 |
| amishshah/imbalanced_5 | false | [] | null | 0 | 0 |
| amishshah/imbalanced_6 | false | [] | null | 0 | 0 |
| amishshah/imbalanced_7 | false | [] | null | 0 | 0 |
| amishshah/imbalanced_8 | false | [] | null | 0 | 0 |
| amishshah/imbalanced_9 | false | [] | null | 0 | 0 |
| KyonBS/fudatsukiKyoukoIA-dataset | false | [] | null | 0 | 0 |
| DevAibest/alpaca_json_data | false | [ "license:afl-3.0" ] | null | 0 | 0 |
| vmalperovich/QC | false | [ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:mit" ] | This data collection contains all the data used in our learning question classification experiments (see [1]), which has question class definitions, the training and testing question sets, examples of preprocessing the questions, feature definition scripts and examples of semantically related word features. This work has been done by Xin Li and Dan Roth and supported by [2]. | 0 | 0 |
| norabelrose/truthful_qa_mc | false | [ "license:apache-2.0" ] | null | 0 | 0 |
| norabelrose/truthful_qa | false | [ "license:apache-2.0" ] | TruthfulQA is a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. Questions are crafted so that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. | 0 | 0 |
| jimzhiwei/amazon_product | false | [ "license:openrail" ] | null | 0 | 0 |
| iamketan25/roleplay-instructions-dataset | false | [] | null | 0 | 0 |
| AlekseyKorshuk/dummy-conversation-with-system | false | [] | null | 0 | 0 |
| henri28/Hague180Conventionptbr-fr | false | [] | null | 0 | 0 |
| tejasbale02/JohnWickCollection | false | [] | null | 0 | 0 |
| shawnwork/ttmnist | false | [] | null | 0 | 0 |
| shawnwork/ttmnist1 | false | [] | null | 0 | 0 |
| Esgbdf/1 | false | [] | null | 0 | 0 |
| KyonBS/kunoTsubIA-dataset | false | [] | null | 0 | 0 |
| KmAnu/CT1 | false | [] | null | 0 | 0 |
| shawnwork/test_final | false | [] | null | 0 | 0 |