| Dataset Name | Test Accuracy |
|---|---|
| glue/mrpc | 0.856 |
| glue/qqp | 0.876 |
| hlgd | 0.898 |
| paws/labeled_final | 0.952 |
| paws/labeled_swap | 0.968 |
| medical_questions_pairs | 0.8562 |
| parade | 0.732 |
| apt | 0.824 |
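
The checkpoint can be loaded as a standard sequence-pair classifier with the Hugging Face `transformers` library. The sketch below is illustrative: the example sentences are made up, and the mapping from label index to "paraphrase" vs. "not paraphrase" is an assumption, so inspect `model.config.id2label` for the actual label names.

```python
# Minimal usage sketch (assumptions noted in comments).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sileod/deberta-v3-base-tasksource-paraphrase"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Two illustrative sentences; any sentence pair works.
sentence_a = "The company reported strong quarterly earnings."
sentence_b = "Quarterly profits at the firm were robust."

# Encode the pair jointly, as for any sequence-pair classification head.
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)

# Which index means "paraphrase" depends on the model config; check it here.
print(model.config.id2label)
print(probs)
```

Batching works the same way: pass a list of first sentences and a list of second sentences to the tokenizer instead of single strings.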
```bibtex
@article{sileo2023tasksource,
  title={tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}
```

(The tasksource paper was accepted at LREC-COLING 2024.)

