pegasus-large-finetuned-cnn_dailymail

This model is a fine-tuned version of google/pegasus-large on the cnn_dailymail dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0469
  • Rouge1: 45.2373
  • Rouge2: 22.4813
  • RougeL: 31.8329
  • RougeLsum: 41.6862
  • Bleu 1: 34.8304
  • Bleu 2: 23.4162
  • Bleu 3: 17.4357
  • Meteor: 35.0815
  • Summary length: 56.5898
  • Original length: 48.7656
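
Scores of this kind can be computed with the Hugging Face evaluate library. The snippet below is a minimal sketch for the ROUGE metrics, assuming summaries have already been generated; the prediction/reference texts are illustrative, not taken from the evaluation set, and whether the card's numbers used exactly these settings is an assumption.

```python
import evaluate

# ROUGE from `evaluate` returns fractions in [0, 1]; the card reports them * 100.
rouge = evaluate.load("rouge")

predictions = ["the cat sat on the mat"]       # illustrative model summaries
references = ["a cat was sitting on the mat"]  # illustrative reference summaries

scores = rouge.compute(predictions=predictions, references=references)
print({name: round(value * 100, 4) for name, value in scores.items()})
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```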

Model description

More information needed

Intended uses & limitations

More information needed
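
As a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub as CyrexPro/pegasus-large-finetuned-cnn_dailymail (the id this card is published under), the model can be loaded with the transformers summarization pipeline; the input text and generation settings are illustrative:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (model id taken from this card).
summarizer = pipeline(
    "summarization",
    model="CyrexPro/pegasus-large-finetuned-cnn_dailymail",
)

# Replace with a real news article; CNN/DailyMail-style stories are the intended input.
article = "Replace this placeholder text with a news article to summarize."

# Generation settings here are illustrative, not the card's evaluation settings.
result = summarizer(article, max_length=64, min_length=16, do_sample=False)
print(result[0]["summary_text"])
```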

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
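
In transformers, these settings correspond roughly to the Seq2SeqTrainingArguments sketched below; output_dir is an illustrative assumption, since the exact training script is not part of this card.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an illustrative assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-large-finetuned-cnn_dailymail",
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,     # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```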

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Bleu 1 | Bleu 2 | Bleu 3 | Meteor | Summary Length | Original Length |
|--------------:|------:|-----:|----------------:|-------:|-------:|-------:|----------:|-------:|-------:|-------:|-------:|---------------:|----------------:|
| 1.2058 | 1.0 | 7165 | 1.0640 | 44.5245 | 22.1439 | 31.4275 | 40.9808 | 33.9802 | 22.8183 | 16.9712 | 34.1035 | 55.245 | 48.7656 |
| 1.0602 | 2.0 | 14330 | 1.0534 | 44.7088 | 22.1286 | 31.3398 | 41.0804 | 34.1571 | 22.9231 | 17.0479 | 35.1782 | 59.6166 | 48.7656 |
| 1.0144 | 3.0 | 21495 | 1.0479 | 45.0257 | 22.3325 | 31.7313 | 41.4189 | 34.6084 | 23.227 | 17.2859 | 34.7757 | 56.1443 | 48.7656 |
| 0.9875 | 4.0 | 28660 | 1.0469 | 45.2373 | 22.4813 | 31.8329 | 41.6862 | 34.8304 | 23.4162 | 17.4357 | 35.0815 | 56.5898 | 48.7656 |

Framework versions

  • Transformers 4.40.0
  • PyTorch 2.2.2+cu118
  • Datasets 2.19.0
  • Tokenizers 0.19.1

Model size

  • 571M params (F32, Safetensors)

Model tree for CyrexPro/pegasus-large-finetuned-cnn_dailymail

  • Base model: google/pegasus-large (this model is one of its 54 fine-tunes)