---
language:
  - it
license: apache-2.0
tags:
  - italian
  - sequence-to-sequence
  - style-transfer
  - formality-style-transfer
datasets:
  - yahoo/xformal_it
widget:
  - text: Questa performance è a dir poco spiacevole.
  - text: >-
      In attesa di un Suo cortese riscontro, Le auguriamo un piacevole
      proseguimento di giornata.
  - text: Questa visione mi procura una goduria indescrivibile.
  - text: qualora ciò possa interessarti, ti pregherei di contattarmi.
metrics:
  - rouge
  - bertscore
model-index:
  - name: it5-base-formal-to-informal
    results:
      - task:
          type: formality-style-transfer
          name: Formal-to-informal Style Transfer
        dataset:
          type: xformal_it
          name: XFORMAL (Italian Subset)
        metrics:
          - type: rouge1
            value: 0.652
            name: Avg. Test Rouge1
          - type: rouge2
            value: 0.446
            name: Avg. Test Rouge2
          - type: rougeL
            value: 0.632
            name: Avg. Test RougeL
          - type: bertscore
            value: 0.665
            name: Avg. Test BERTScore
            args:
              - model_type: dbmdz/bert-base-italian-xxl-uncased
              - lang: it
              - num_layers: 10
              - rescale_with_baseline: true
              - baseline_path: bertscore_baseline_ita.tsv
co2_eq_emissions:
  emissions: 17g
  source: Google Cloud Platform Carbon Footprint
  training_type: fine-tuning
  geographical_location: Eemshaven, Netherlands, Europe
  hardware_used: 1 TPU v3-8 VM
---

# IT5 Base for Formal-to-informal Style Transfer 🤗

This repository contains the checkpoint for the IT5 Base model fine-tuned on formal-to-informal style transfer on the Italian subset of the XFORMAL dataset, as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by Gabriele Sarti and Malvina Nissim.

A comprehensive overview of other released materials is provided in the `gsarti/it5` repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
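
For context, the BERTScore values reported above were computed with the configuration listed in the metadata. A minimal sketch of evaluating with those settings using the `bert-score` package might look as follows; the candidate/reference strings are hypothetical, and `bertscore_baseline_ita.tsv` must be available locally:

```python
# Sketch of the BERTScore configuration from the metadata (bert-score package).
# The candidate/reference pair below is hypothetical.
from bert_score import score

candidates = ["e grazie per la vostra disponibilità!"]  # model output (example)
references = ["grazie mille per la disponibilità!"]     # gold informal rewrite (hypothetical)

P, R, F1 = score(
    candidates,
    references,
    model_type="dbmdz/bert-base-italian-xxl-uncased",
    num_layers=10,
    lang="it",
    rescale_with_baseline=True,
    baseline_path="bertscore_baseline_ita.tsv",  # path taken from the metadata
)
print(F1.mean().item())
```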

## Using the model

Model checkpoints are available for use in TensorFlow, PyTorch, and JAX. They can be used directly with the `pipeline` API:

```python
from transformers import pipeline

f2i = pipeline("text2text-generation", model="it5/it5-base-formal-to-informal")
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```

or loaded using autoclasses:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-formal-to-informal")
```
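
After loading with the autoclasses, a minimal sketch of running generation yourself might look as follows; the `max_new_tokens` value is an illustrative choice, not one prescribed by the paper:

```python
# Minimal sketch: produce an informal rewrite with the model loaded above.
inputs = tokenizer(
    "Vi ringrazio infinitamente per vostra disponibilità",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=64)  # length cap is an illustrative choice
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```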

If you use this model in your research, please cite our work as:

```bibtex
@article{sarti-nissim-2022-it5,
    title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
    author={Sarti, Gabriele and Nissim, Malvina},
    journal={ArXiv preprint 2203.03759},
    url={https://arxiv.org/abs/2203.03759},
    year={2022},
    month={mar}
}
```