Dataset schema: pipeline_tag (stringclasses, 48 values), library_name (stringclasses, 205 values), text (stringlengths, 0 to 18.3M), metadata (stringlengths, 2 to 1.07B), id (stringlengths, 5 to 122), last_modified (null), tags (sequencelengths, 1 to 1.84k), sha (null), created_at (stringlengths, 25 to 25).
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp_znj9o4r This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "tmp_znj9o4r", "results": []}]}
AWTStress/stress_classifier
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # stress_score This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "stress_score", "results": []}]}
AWTStress/stress_score
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AZTEC/Arcane
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aakansha/hateSpeechClassification
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aakansha/hs
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aarav/MeanMadCrazy_HarryPotterBot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AaravMonkey/modelRepo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Aarbor/xlm-roberta-base-finetuned-marc-en
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co./facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4812 - Wer: 0.3557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4668 | 4.0 | 500 | 1.3753 | 0.9895 | | 0.6126 | 8.0 | 1000 | 0.4809 | 0.4350 | | 0.2281 | 12.0 | 1500 | 0.4407 | 0.4033 | | 0.1355 | 16.0 | 2000 | 0.4590 | 0.3765 | | 0.0923 | 20.0 | 2500 | 0.4754 | 0.3707 | | 0.0654 | 24.0 | 3000 | 0.4719 | 0.3557 | | 0.0489 | 28.0 | 3500 | 0.4812 | 0.3557 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
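The card above reports hyperparameters and WER but no usage snippet. Below is a minimal inference sketch, assuming the checkpoint id listed in this row (`Pinwheel/wav2vec2-base-timit-demo-colab`) loads with the standard `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder, so this shows the call shape rather than a verified recipe.

```python
from transformers import pipeline

# Hypothetical inference sketch: load the fine-tuned wav2vec2 checkpoint named in this row.
asr = pipeline(
    "automatic-speech-recognition",
    model="Pinwheel/wav2vec2-base-timit-demo-colab",
)

# "sample.wav" is a placeholder path to a 16 kHz mono recording; the pipeline returns {"text": ...}.
print(asr("sample.wav")["text"])
```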
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
Pinwheel/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xls-r-1b-hi-v2
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xls-r-1b-hi
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xls-r-1b-hindi
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xls-r-300m-50-hi
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xls-r-300m-hi-v2
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pinwheel/wav2vec2-large-xls-r-300m-hi-v3
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xls-r-300m-hi
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xls-r-300m-tr-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
Pinwheel/wav2vec2-large-xlsr-53-hi
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
keras
{}
Ab0/autoencoder-keras-mnist-demo
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
null
# FashionMNIST PyTorch Quick Start
{"tags": ["image-classification", "pytorch", "huggingpics", "some_thing"], "metrics": ["accuracy"], "private": false}
Ab0/foo-model
null
[ "pytorch", "image-classification", "huggingpics", "some_thing", "model-index", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
keras
{}
Ab0/keras-dummy-functional-demo
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
keras
{}
Ab0/keras-dummy-model-mixin-demo
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
keras
{}
Ab0/keras-dummy-sequential-demo
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Ab2021/bookst5
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abab/Test_Albert
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AbdelrahmanZayed/my-awesome-model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AbderrahimRezki/DialoGPT-small-harry
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
AbderrahimRezki/HarryPotterBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co./aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co./datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co./bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co./bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co./Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co./Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co./Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co./Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
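The card above gives only evaluation numbers. A minimal usage sketch follows, assuming the checkpoint id listed in this row (`Abdou/arabert-base-algerian`) works with the standard `transformers` text-classification pipeline; the example comment is arbitrary and the returned label names depend on the checkpoint's config.

```python
from transformers import pipeline

# Hypothetical usage sketch for the Algerian-dialect sentiment checkpoint named in this row.
classifier = pipeline("text-classification", model="Abdou/arabert-base-algerian")

# Arbitrary example comment; label names come from the model's config (id2label).
print(classifier("هذا الفيديو رائع جدا"))
```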
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-base-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co./aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co./datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co./bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co./bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co./Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co./Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co./Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co./Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-large-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co./aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co./datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co./bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co./bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co./Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co./Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co./Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co./Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-medium-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co./aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co./datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co./bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co./bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co./Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co./Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co./Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co./Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-mini-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abdullaziz/model1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AbdulmalikAdeyemo/wav2vec2-large-xls-r-300m-hausa
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
Model details available [here](https://github.com/awasthiabhijeet/PIE)
{}
AbhijeetA/PIE
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abhilash/BERTBasePyTorch
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# HarryPotter DialoGPT Model
{"tags": ["conversational"]}
AbhinavSaiTheGreat/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Abhishek4/Cuad_Finetune_roberta
null
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abi9x/DiabloGPT-large-Axel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
AbidHasan95/movieHunt2
null
[ "transformers", "pytorch", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AbidineVall/my-new-shiny-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
## Pretrained Model: BERT base model (cased) BERT base model (cased) is a model pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English. ## Pretrained Model Description BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives: - Masked language modeling (MLM) - Next sentence prediction (NSP) ## Fine-tuned Model Description: BERT fine-tuned on CoLA The pretrained model can be fine-tuned on other NLP tasks. This BERT model has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in the GLUE benchmark. By fine-tuning BERT on the CoLA dataset, the model is able to classify a given sentence as grammatically and semantically acceptable or not acceptable. ## How to use? ###### Directly with a pipeline for a text-classification NLP task ```python from transformers import pipeline cola = pipeline('text-classification', model='Abirate/bert_fine_tuned_cola') cola("Tunisia is a beautiful country") [{'label': 'acceptable', 'score': 0.989352285861969}] ``` ###### Breaking down all the steps (Tokenization, Modeling, Postprocessing) ```python from transformers import AutoTokenizer, TFAutoModelForSequenceClassification import tensorflow as tf import numpy as np tokenizer = AutoTokenizer.from_pretrained('Abirate/bert_fine_tuned_cola') model = TFAutoModelForSequenceClassification.from_pretrained("Abirate/bert_fine_tuned_cola") text = "Tunisia is a beautiful country." encoded_input = tokenizer(text, return_tensors='tf') #The logits output = model(encoded_input) #Postprocessing probas_output = tf.math.softmax(tf.squeeze(output['logits']), axis = -1) class_preds = np.argmax(probas_output, axis = -1) #Predicting the class acceptable or not acceptable model.config.id2label[class_preds] #Result 'acceptable' ```
{}
Abirate/bert_fine_tuned_cola
null
[ "transformers", "tf", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abirate/code_net_new_tokenizer_from_WPiece_bert_algorithm
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Abirate/code_net_similarity_model_sub23_fbert
null
[ "transformers", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Abirate/gpt_3_finetuned_multi_x_science
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abobus/Fu
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abolior/audiobot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Abozoroov/Me
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AbyV/test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# jeff's 100% authorized brain scan
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-jefftastic
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Mozark's Brain Uploaded to Hugging Face
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-mozark
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Mozark's Brain Uploaded to Hugging Face but v2
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-mozarkv2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Unfiltered brain upload of sinclair
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-sinclair
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co./distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2128 - Accuracy: 0.928 - F1: 0.9280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8151 | 1.0 | 250 | 0.3043 | 0.907 | 0.9035 | | 0.24 | 2.0 | 500 | 0.2128 | 0.928 | 0.9280 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
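The card above lists metrics and hyperparameters but no usage snippet. A minimal inference sketch follows, assuming the checkpoint id in this row (`ActivationAI/distilbert-base-uncased-finetuned-emotion`) loads with the standard `transformers` text-classification pipeline; the input sentence is an arbitrary example and the emotion label set comes from the checkpoint's config.

```python
from transformers import pipeline

# Hypothetical usage sketch for the emotion classifier named in this row.
classifier = pipeline(
    "text-classification",
    model="ActivationAI/distilbert-base-uncased-finetuned-emotion",
)

# Arbitrary example input; the predicted label is one of the emotion dataset's classes.
print(classifier("I am so happy with how this model turned out!"))
```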
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.928, "name": "Accuracy"}, {"type": "f1", "value": 0.9280065074208208, "name": "F1"}]}]}]}
ActivationAI/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
AdWeeb/HTI_mbert
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Adalid1985/Adalidarcane
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-anli_r3` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [anli](https://huggingface.co./datasets/anli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-anli_r3", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["anli"]}
AdapterHub/bert-base-uncased-pf-anli_r3
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:anli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-art` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [art](https://huggingface.co./datasets/art/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-art", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["art"]}
AdapterHub/bert-base-uncased-pf-art
null
[ "adapter-transformers", "bert", "en", "dataset:art", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-boolq` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-boolq", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:qa/boolq", "adapter-transformers"], "datasets": ["boolq"]}
AdapterHub/bert-base-uncased-pf-boolq
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:qa/boolq", "en", "dataset:boolq", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-cola` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cola", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:lingaccept/cola", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-cola
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:lingaccept/cola", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-commonsense_qa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-commonsense_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/csqa", "adapter-transformers"], "datasets": ["commonsense_qa"]}
AdapterHub/bert-base-uncased-pf-commonsense_qa
null
[ "adapter-transformers", "bert", "adapterhub:comsense/csqa", "en", "dataset:commonsense_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-comqa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [com_qa](https://huggingface.co./datasets/com_qa/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-comqa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["com_qa"]}
AdapterHub/bert-base-uncased-pf-comqa
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:com_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-conll2000` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2000", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:chunk/conll2000", "adapter-transformers"], "datasets": ["conll2000"]}
AdapterHub/bert-base-uncased-pf-conll2000
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:chunk/conll2000", "en", "dataset:conll2000", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/conll2003", "adapter-transformers"], "datasets": ["conll2003"]}
AdapterHub/bert-base-uncased-pf-conll2003
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:ner/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003_pos` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003_pos", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:pos/conll2003", "adapter-transformers"], "datasets": ["conll2003"]}
AdapterHub/bert-base-uncased-pf-conll2003_pos
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:pos/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-copa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-copa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/copa", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-copa
null
[ "adapter-transformers", "bert", "adapterhub:comsense/copa", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-cosmos_qa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cosmos_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/cosmosqa", "adapter-transformers"], "datasets": ["cosmos_qa"]}
AdapterHub/bert-base-uncased-pf-cosmos_qa
null
[ "adapter-transformers", "bert", "adapterhub:comsense/cosmosqa", "en", "dataset:cosmos_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-cq` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cq", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/cq", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-cq
null
[ "adapter-transformers", "bert", "question-answering", "adapterhub:qa/cq", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-drop` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [drop](https://huggingface.co./datasets/drop/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-drop", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["drop"]}
AdapterHub/bert-base-uncased-pf-drop
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:drop", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_p` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co./datasets/duorc/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_p", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]}
AdapterHub/bert-base-uncased-pf-duorc_p
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:duorc", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_s` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co./datasets/duorc/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_s", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]}
AdapterHub/bert-base-uncased-pf-duorc_s
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:duorc", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-emo` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emo](https://huggingface.co./datasets/emo/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emo", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emo"]}
AdapterHub/bert-base-uncased-pf-emo
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:emo", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
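As a complement to the loading snippet in the `AdapterHub/bert-base-uncased-pf-emo` card above, here is a hedged sketch of single-text classification with the loaded head. The input sentence is invented, and the mapping from the predicted class id to an emotion label is not assumed here; it follows the emo training setup.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emo", source="hf")
model.active_adapters = adapter_name
model.eval()

# Illustrative input string.
inputs = tokenizer("That is wonderful news, congratulations!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # assumes a standard sequence-classification output

probs = logits.softmax(dim=-1)[0]
print(int(probs.argmax()), probs.tolist())  # class id + scores; label names come from the emo dataset
```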
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-emotion` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emotion](https://huggingface.co./datasets/emotion/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emotion", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emotion"]}
AdapterHub/bert-base-uncased-pf-emotion
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:emotion", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-fce_error_detection` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-fce_error_detection", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ged/fce", "adapter-transformers"], "datasets": ["fce_error_detection"]}
AdapterHub/bert-base-uncased-pf-fce_error_detection
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:ged/fce", "en", "dataset:fce_error_detection", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
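The `AdapterHub/bert-base-uncased-pf-fce_error_detection` card above only loads the tagging adapter. A rough sketch of token-level prediction follows; it assumes the tagging head returns per-token logits of shape `(batch, seq_len, num_labels)`, and the deliberately ungrammatical sentence is an invented example.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter(
    "AdapterHub/bert-base-uncased-pf-fce_error_detection", source="hf"
)
model.active_adapters = adapter_name
model.eval()

sentence = "She go to school every days ."  # invented, deliberately ungrammatical
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape: (1, seq_len, num_labels)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, logits.argmax(dim=-1)[0].tolist()):
    print(token, label_id)  # label ids distinguish correct vs. erroneous tokens
```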
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-hellaswag` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hellaswag", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/hellaswag", "adapter-transformers"], "datasets": ["hellaswag"]}
AdapterHub/bert-base-uncased-pf-hellaswag
null
[ "adapter-transformers", "bert", "adapterhub:comsense/hellaswag", "en", "dataset:hellaswag", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
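The hellaswag adapter ships a multiple-choice head, which the card above does not demonstrate. Below is a sketch under two assumptions: that the head accepts inputs shaped `(batch, num_choices, seq_len)` the way `BertForMultipleChoice` does, and that the invented context/endings stand in for real HellaSwag items.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hellaswag", source="hf")
model.active_adapters = adapter_name
model.eval()

# Invented context and candidate endings.
context = "A man sets a ladder against the house. He"
endings = [
    "climbs up and starts cleaning the gutter.",
    "dives into a swimming pool.",
    "reads a newspaper upside down.",
    "drives the ladder down the street.",
]

# Encode each (context, ending) pair, then add a batch dimension so the
# final shape is (1, num_choices, seq_len), as for BertForMultipleChoice.
enc = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # assumed: one score per choice

print(int(logits.argmax(dim=-1)))  # index of the highest-scoring ending
```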
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-hotpotqa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [hotpot_qa](https://huggingface.co./datasets/hotpot_qa/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hotpotqa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["hotpot_qa"]}
AdapterHub/bert-base-uncased-pf-hotpotqa
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:hotpot_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-imdb` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-imdb", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/imdb", "adapter-transformers"], "datasets": ["imdb"]}
AdapterHub/bert-base-uncased-pf-imdb
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sentiment/imdb", "en", "dataset:imdb", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
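Beyond inference, a loaded adapter can be fine-tuned further while the base model stays frozen. The sketch below is one rough way to do that for the imdb adapter with the stock `Trainer`; it assumes the loaded classification head computes a loss when `labels` are supplied, and the tiny training slice and hyperparameters are arbitrary choices for illustration.

```python
from datasets import load_dataset
from transformers import (AutoModelWithHeads, AutoTokenizer, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-imdb", source="hf")

# Freeze the base model; only adapter (and head) weights stay trainable.
model.train_adapter(adapter_name)
model.active_adapters = adapter_name

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# Arbitrary 1% slice of IMDb just to keep the illustration small.
train_ds = load_dataset("imdb", split="train[:1%]").map(tokenize, batched=True)
train_ds = train_ds.rename_column("label", "labels")
train_ds.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

args = TrainingArguments(output_dir="imdb-adapter-ft", num_train_epochs=1,
                         per_device_train_batch_size=8, logging_steps=10)
Trainer(model=model, args=args, train_dataset=train_ds).train()

model.save_adapter("imdb-adapter-ft", adapter_name)  # save only the adapter weights
```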
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-mit_movie_trivia` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mit_movie_trivia", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/mit_movie_trivia", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-mit_movie_trivia
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:ner/mit_movie_trivia", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-mnli` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mnli", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/multinli", "adapter-transformers"], "datasets": ["multi_nli"]}
AdapterHub/bert-base-uncased-pf-mnli
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/multinli", "en", "dataset:multi_nli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
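Since the MNLI adapter classifies a premise/hypothesis pair rather than a single sentence, a short sketch of pair input is added below; the two sentences are invented, and the order of the three output classes follows the MultiNLI training setup rather than being assumed here.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mnli", source="hf")
model.active_adapters = adapter_name
model.eval()

premise = "A soccer game with multiple males playing."   # invented pair
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.softmax(dim=-1)[0].tolist())  # 3-way entailment scores
```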
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-mrpc` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mrpc", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sts/mrpc", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-mrpc
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sts/mrpc", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-multirc` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-multirc", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "adapterhub:rc/multirc", "bert", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-multirc
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:rc/multirc", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-newsqa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [newsqa](https://huggingface.co./datasets/newsqa/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-newsqa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["newsqa"]}
AdapterHub/bert-base-uncased-pf-newsqa
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:newsqa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-pmb_sem_tagging` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-pmb_sem_tagging", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:semtag/pmb", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-pmb_sem_tagging
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:semtag/pmb", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-qnli` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-qnli", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/qnli", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-qnli
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/qnli", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-qqp` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-qqp", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "adapter-transformers", "adapterhub:sts/qqp", "bert"]}
AdapterHub/bert-base-uncased-pf-qqp
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sts/qqp", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-quail` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quail](https://huggingface.co./datasets/quail/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quail", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quail"]}
AdapterHub/bert-base-uncased-pf-quail
null
[ "adapter-transformers", "bert", "en", "dataset:quail", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-quartz` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quartz](https://huggingface.co./datasets/quartz/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quartz", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quartz"]}
AdapterHub/bert-base-uncased-pf-quartz
null
[ "adapter-transformers", "bert", "en", "dataset:quartz", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-quoref` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quoref](https://huggingface.co./datasets/quoref/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quoref", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["quoref"]}
AdapterHub/bert-base-uncased-pf-quoref
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:quoref", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-race` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-race", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["adapterhub:rc/race", "bert", "adapter-transformers"], "datasets": ["race"]}
AdapterHub/bert-base-uncased-pf-race
null
[ "adapter-transformers", "bert", "adapterhub:rc/race", "en", "dataset:race", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-record` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-record", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:rc/record", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-record
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:rc/record", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-rotten_tomatoes` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rotten_tomatoes", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/rotten_tomatoes", "adapter-transformers"], "datasets": ["rotten_tomatoes"]}
AdapterHub/bert-base-uncased-pf-rotten_tomatoes
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sentiment/rotten_tomatoes", "en", "dataset:rotten_tomatoes", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
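Because every adapter in these cards plugs into the same `bert-base-uncased` backbone, several of them can be kept loaded at once and switched per request. The sketch below pairs this rotten_tomatoes adapter with the imdb adapter from earlier in the list; it assumes that setting `model.active_adapters` to a loaded adapter's name also activates its matching prediction head, as in the single-adapter snippets above, and the review text is invented.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Two sentiment adapters loaded into one base model, each with its own head.
imdb = model.load_adapter("AdapterHub/bert-base-uncased-pf-imdb", source="hf")
rotten = model.load_adapter("AdapterHub/bert-base-uncased-pf-rotten_tomatoes", source="hf")
model.eval()

inputs = tokenizer("A slow start, but the final act is terrific.", return_tensors="pt")

for name in (imdb, rotten):
    model.active_adapters = name  # switch adapter (and, assumed, its head) per pass
    with torch.no_grad():
        logits = model(**inputs).logits
    print(name, logits.softmax(dim=-1).tolist())
```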
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-rte` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rte", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/rte", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-rte
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/rte", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-scicite` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [scicite](https://huggingface.co./datasets/scicite/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scicite", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["scicite"]}
AdapterHub/bert-base-uncased-pf-scicite
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:scicite", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-scitail` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scitail", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/scitail", "adapter-transformers"], "datasets": ["scitail"]}
AdapterHub/bert-base-uncased-pf-scitail
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/scitail", "en", "dataset:scitail", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-sick` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sick", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "adapter-transformers", "bert", "adapterhub:nli/sick"], "datasets": ["sick"]}
AdapterHub/bert-base-uncased-pf-sick
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/sick", "en", "dataset:sick", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-snli` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [snli](https://huggingface.co./datasets/snli/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-snli", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["snli"]}
AdapterHub/bert-base-uncased-pf-snli
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:snli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-social_i_qa` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [social_i_qa](https://huggingface.co./datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-social_i_qa", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
    title={What to Pre-Train on? Efficient Intermediate Task Selection},
    author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2104.08247",
    pages = "to appear",
}
```
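Because this adapter ships a multiple-choice head, inference differs from plain classification: each candidate answer is paired with the context and the head scores the candidates jointly. The sketch below illustrates that pattern under several assumptions (the `(batch, num_choices, seq_len)` input layout, the example texts, and the shape of `outputs.logits` are not documented behavior of this adapter and should be verified).

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-social_i_qa", source="hf")
model.active_adapters = adapter_name
model.eval()

# A hypothetical SocialIQA-style example: context/question plus three candidates
context = "Taylor gave their friend a ride to the airport early in the morning."
question = "Why did Taylor do this?"
answers = ["to be helpful", "to miss work", "to buy a ticket"]

# Pair the (context + question) with every candidate; multiple-choice models
# typically expect inputs of shape (batch, num_choices, seq_len).
firsts = [f"{context} {question}"] * len(answers)
enc = tokenizer(firsts, answers, return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # -> (1, num_choices, seq_len)

with torch.no_grad():
    outputs = model(**inputs)

# Assumed: the head returns one score per candidate answer
best = torch.argmax(outputs.logits, dim=-1).item()
print(answers[best])
```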
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["social_i_qa"]}
AdapterHub/bert-base-uncased-pf-social_i_qa
null
[ "adapter-transformers", "bert", "en", "dataset:social_i_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-squad` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
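For the question-answering head, a typical extractive-QA decoding loop looks like the sketch below. It is a minimal example under the assumption that the head exposes `start_logits` and `end_logits` like standard `transformers` QA models; the question/context strings are made up for illustration, and greedy span selection is a simplification of full SQuAD-style decoding.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name
model.eval()

question = "Which dataset was the adapter trained on?"
context = "The adapter was trained on the SQuAD 1.1 dataset using the bert-base-uncased model."
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: pick the highest-scoring start and end positions.
start = torch.argmax(outputs.start_logits, dim=-1).item()
end = torch.argmax(outputs.end_logits, dim=-1).item()
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```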
{"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad1", "adapter-transformers"], "datasets": ["squad"]}
AdapterHub/bert-base-uncased-pf-squad
null
[ "adapter-transformers", "bert", "question-answering", "adapterhub:qa/squad1", "en", "dataset:squad", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
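SQuAD 2.0 additionally contains unanswerable questions, which extractive models commonly signal by assigning their best span to the `[CLS]` position. The sketch below illustrates that convention; treating index 0 as the "no answer" score, the zero threshold, and the example texts are illustrative assumptions rather than documented behavior of this adapter.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
model.eval()

question = "What colour is the moon's cheese?"
context = "SQuAD 2.0 mixes answerable questions with questions that cannot be answered from the passage."
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

start_logits, end_logits = outputs.start_logits[0], outputs.end_logits[0]

# Assumed convention: the [CLS] span (index 0) acts as the "no answer" score
# and is compared against the best non-null span.
null_score = start_logits[0] + end_logits[0]
start = torch.argmax(start_logits[1:]).item() + 1
end = torch.argmax(end_logits[1:]).item() + 1
span_score = start_logits[start] + end_logits[end]

if null_score > span_score or end < start:
    print("No answer found in the context.")
else:
    answer_ids = inputs["input_ids"][0, start : end + 1]
    print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```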
{"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad2", "adapter-transformers"], "datasets": ["squad_v2"]}
AdapterHub/bert-base-uncased-pf-squad_v2
null
[ "adapter-transformers", "bert", "question-answering", "adapterhub:qa/squad2", "en", "dataset:squad_v2", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-sst2` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
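As a quick illustration of single-sentence inference with the activated adapter, the sketch below scores one sentence for sentiment. The label order (index 0 = negative, index 1 = positive) is an assumption inherited from the usual SST-2 setup and should be checked against the head's label mapping; the example sentence is made up.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2", source="hf")
model.active_adapters = adapter_name
model.eval()

sentence = "A charming and often surprisingly funny film."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the two class logits; label names are assumed, not read from the head.
probs = torch.softmax(outputs.logits, dim=-1)[0]
labels = ["negative", "positive"]
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
```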
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/sst-2", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-sst2
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sentiment/sst-2", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00