Model Card for t5_small Summarization Model
Model Details
- Model Name: T5-Small Summarization Model
- Model Type: Text-to-Text Transformer Model
- Language: English
- License: Apache 2.0
Training Data
- Dataset: The model was fine-tuned on the CNN/DailyMail dataset.
- Dataset Details:
  - Contains over 287,000 news articles paired with human-written summaries.
  - The articles cover global news and current events across a wide range of topics.
- Preprocessing (a tokenization sketch follows this list):
  - Text normalization and tokenization were performed using the T5 tokenizer.
  - Input articles were truncated or padded to a maximum length of 512 tokens.
  - Summaries were truncated or padded to a maximum length of 150 tokens.
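As a rough illustration of this preprocessing (not taken verbatim from the card), the sketch below loads the data with the Hugging Face `datasets` library and applies the token limits listed above. The `cnn_dailymail` "3.0.0" configuration, the "summarize:" task prefix, and the column names `article`/`highlights` are assumptions, not details stated in this card.

```python
from datasets import load_dataset
from transformers import T5Tokenizer

# Assumed setup: CNN/DailyMail from the Hugging Face Hub and the base t5-small tokenizer
dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

def preprocess(batch):
    # The "summarize:" task prefix is the usual T5 convention; the card does not state it
    inputs = ["summarize: " + article for article in batch["article"]]

    # Truncate/pad articles to 512 tokens and summaries to 150 tokens, as described above
    model_inputs = tokenizer(inputs, max_length=512, padding="max_length", truncation=True)
    labels = tokenizer(batch["highlights"], max_length=150, padding="max_length", truncation=True)

    # Replace padding token ids in the labels with -100 so they are ignored by the loss
    model_inputs["labels"] = [
        [(tok if tok != tokenizer.pad_token_id else -100) for tok in seq]
        for seq in labels["input_ids"]
    ]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)
```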
Training Procedure
- Hyperparameters (a fine-tuning sketch using these values follows this list):
  - Batch Size: 4
  - Learning Rate: 2e-5
  - Number of Epochs: 5
  - Gradient Accumulation Steps: 4
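A minimal fine-tuning sketch with these hyperparameters, assuming the Hugging Face `Seq2SeqTrainer` and the `tokenizer`/`tokenized` objects from the preprocessing sketch above; the output directory is a placeholder, and the card does not specify the exact training script.

```python
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
)

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hyperparameters mirror the list above; output_dir is a placeholder
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-cnn-dailymail",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=5,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```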
How to Use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the fine-tuned model and tokenizer
tokenizer = T5Tokenizer.from_pretrained('path_to_your_model')
model = T5ForConditionalGeneration.from_pretrained('path_to_your_model')

# Input text
article = "Your news article text goes here."
```
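The card does not show the generation step; a minimal continuation could look like the sketch below. The "summarize:" prefix and the beam-search settings are illustrative assumptions, while the 512/150 token limits mirror the preprocessing described above.

```python
# Tokenize the article with the same 512-token limit used during preprocessing
inputs = tokenizer("summarize: " + article, return_tensors="pt",
                   max_length=512, truncation=True)

# Generate a summary capped at 150 tokens; beam settings are illustrative
summary_ids = model.generate(inputs["input_ids"], max_length=150,
                             num_beams=4, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

Beam search is only one reasonable decoding choice here; sampling or different beam widths trade summary quality against speed.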
Evaluation
Limitations
Ethical Considerations