# Bert2Bert Summarization with 🤗EncoderDecoder Framework

This model is a warm-started BERT2BERT model fine-tuned on the CNN/Dailymail summarization dataset.
The model achieves an 18.22 ROUGE-2 score on the CNN/Dailymail test dataset.
For more details on how the model was fine-tuned, please refer to this notebook.
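As a quick illustration, here is a minimal sketch of how the checkpoint could be loaded and used for summarization with the 🤗Transformers `EncoderDecoderModel` API; the article text below is a placeholder and generation settings are left at their defaults.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Load the warm-started BERT2BERT checkpoint and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")

article = "(CNN) Replace this placeholder with the news article you want to summarize."

# BERT's encoder accepts at most 512 tokens, so longer articles are truncated
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")

# Generate an abstractive summary and decode it back to text
output_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```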
## Evaluation results

Self-reported results on the cnn_dailymail test set:

| Metric     | Value  |
|------------|--------|
| ROUGE-1    | 41.281 |
| ROUGE-2    | 18.685 |
| ROUGE-L    | 28.191 |
| ROUGE-LSUM | 38.087 |
| loss       | 2.345  |
| gen_len    | 73.833 |
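The sketch below shows one way these scores could be reproduced, assuming the `datasets` and `evaluate` libraries and only a small slice of the test split for speed; the exact evaluation setup used for the reported numbers is described in the notebook linked above.

```python
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")

# Small sample of the CNN/Dailymail test split; the full split is much larger
dataset = load_dataset("cnn_dailymail", "3.0.0", split="test[:16]")
rouge = evaluate.load("rouge")

def summarize(batch):
    inputs = tokenizer(batch["article"], padding="max_length", truncation=True,
                       max_length=512, return_tensors="pt")
    output_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask)
    batch["pred_summary"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    return batch

results = dataset.map(summarize, batched=True, batch_size=4)

# Note: `evaluate`'s ROUGE returns scores in [0, 1]; the table above reports them x 100
scores = rouge.compute(predictions=results["pred_summary"], references=results["highlights"])
print(scores)
```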