---
tags:
- summarization
- summary
- booksum
- long-document
- long-form
- lsg
datasets:
- kmfoda/booksum
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-booksum
  results: []
---

**Transformers >= 4.36.1**\
**This model relies on a custom modeling file, so you need to add trust_remote_code=True**\
**See [#13467](https://github.com/huggingface/transformers/pull/13467)**

LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
The Github conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

# trust_remote_code=True is required to load the custom LSG modeling file
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-booksum", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-booksum", trust_remote_code=True)

text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
    text,
    truncation=True,
    max_length=64,
    no_repeat_ngram_size=7,
    num_beams=2,
    early_stopping=True
)
```

# ccdv/lsg-bart-base-4096-booksum

This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co./ccdv/lsg-bart-base-4096) on the [kmfoda/booksum](https://huggingface.co./datasets/kmfoda/booksum) dataset (`kmfoda--booksum` config).
It achieves the following results on the evaluation set:

- eval_loss: 3.2654
- eval_rouge1: 33.9468
- eval_rouge2: 6.7034
- eval_rougeL: 16.7879
- eval_rougeLsum: 31.7677
- eval_gen_len: 427.6918
- eval_runtime: 2910.3841
- eval_samples_per_second: 0.492
- eval_steps_per_second: 0.062
- eval_samples: 1431

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the training-arguments sketch at the end of this card):

- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30.0

### Generate hyperparameters

The following hyperparameters were used during generation (see the generation sketch at the end of this card):

- dataset_name: kmfoda/booksum
- dataset_config_name: kmfoda--booksum
- eval_batch_size: 8
- eval_samples: 1431
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 512
- min_length: 128
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123

### Framework versions

- Transformers 4.36.1
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.11.6
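
For reference, a minimal sketch of how the training hyperparameters above map onto `Seq2SeqTrainingArguments`. This is an assumption about the setup, not the author's original training script; the `output_dir` is a hypothetical placeholder.

```python
# Minimal sketch (assumption): the training hyperparameters above expressed as
# Seq2SeqTrainingArguments. Not the original fine-tuning script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="lsg-bart-base-4096-booksum",  # hypothetical output path
    learning_rate=8e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size of 32
    num_train_epochs=30.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```

The listed total_train_batch_size of 32 follows from the per-device batch size of 8 with 4 gradient-accumulation steps, which implies training on a single device.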
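
Similarly, a sketch of running generation with the evaluation-time settings listed under "Generate hyperparameters", calling `model.generate` directly instead of the pipeline. The input text is a placeholder and the model is left on CPU for simplicity.

```python
# Sketch: generation with the evaluation settings above (num_beams=5,
# length_penalty=2.0, max_length=512, min_length=128, early_stopping=True).
# The input text is a placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-booksum", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-booksum", trust_remote_code=True)

# Encode up to the model's 4096-token window
inputs = tokenizer("A long book chapter to summarize ...", truncation=True, max_length=4096, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        num_beams=5,
        length_penalty=2.0,
        max_length=512,
        min_length=128,
        early_stopping=True,
    )
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
```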
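
The ROUGE numbers in the evaluation results come from the evaluation run itself. To score your own outputs in a comparable way, something like the following works with the `evaluate` library; the prediction and reference strings are placeholders, and the exact ROUGE settings of the original run are not documented here.

```python
# Scoring generated summaries against booksum references with ROUGE.
# Placeholder strings; settings may differ from the original evaluation.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a generated summary ..."],
    references=["the gold booksum summary ..."],
    use_stemmer=True,
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```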