dnzblgn committed
Commit a65cba8 · verified · 1 Parent(s): 0d7085c

Update README.md

Files changed (1)
  1. README.md +39 -3
README.md CHANGED
@@ -1,3 +1,39 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - google-t5/t5-base
+ pipeline_tag: summarization
+ ---
+
+ **Model Name:** LoRA Fine-Tuned Model for Dialogue Summarization
+ **Model Type:** Seq2Seq with Low-Rank Adaptation (LoRA)
+ **Base Model:** `google-t5/t5-base`
+
+ ## Model Details
+ - **Architecture**: T5-base
+ - **Fine-Tuning Technique**: LoRA (Low-Rank Adaptation)
+ - **PEFT Method**: Parameter-Efficient Fine-Tuning
+ - **Data**: SAMSum dataset (`samsum`)
+ - **Metrics**: Evaluated using ROUGE (ROUGE-1, ROUGE-2, ROUGE-L, ROUGE-Lsum)
+
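For reference, a minimal sketch of loading the data and metric named above, assuming the SAMSum dataset is pulled from the Hub as `samsum` and ROUGE is computed with the `evaluate` package (neither detail is stated in this card):

```python
# Load the SAMSum dialogue-summarization data and the ROUGE metric.
# Assumptions: Hub dataset id "samsum"; `datasets` and `evaluate` installed.
from datasets import load_dataset
import evaluate

samsum = load_dataset("samsum")   # splits: train / validation / test
rouge = evaluate.load("rouge")    # reports rouge1, rouge2, rougeL, rougeLsum

example = samsum["test"][0]
print(example["dialogue"])        # the chat to summarize
print(example["summary"])         # the human-written reference summary
```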
+ ## Intended Use
+ This model is designed for summarizing dialogues, such as conversations between individuals in a chat or messaging context. It’s suitable for applications in:
+ - **Customer Service**: Summarizing chat logs for quality monitoring or training.
+ - **Messaging Apps**: Generating conversation summaries for user convenience.
+ - **Content Creation**: Assisting writers by summarizing character dialogues.
+
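A minimal inference sketch for these use cases, assuming the weights in this repo are published as a PEFT (LoRA) adapter on top of `google-t5/t5-base`; the adapter repo id and the `summarize:` prompt prefix below are illustrative assumptions:

```python
# Summarize a dialogue with the LoRA adapter applied to the T5 base model.
# The adapter repo id is a placeholder for this model's actual Hub id.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base_id = "google-t5/t5-base"
adapter_id = "dnzblgn/lora-t5-dialogue-summarization"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
inputs = tokenizer("summarize: " + dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```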
+ ## Training Process
+
+ - **Optimizer**: AdamW with a learning rate of 3e-5
+ - **Batch Size**: 4 (gradient accumulation steps of 2, for an effective batch size of 8)
+ - **Training Epochs**: 2
+ - **Evaluation Metrics**: ROUGE-1, ROUGE-2, ROUGE-L, ROUGE-Lsum
+ - **Hardware**: Trained on a single GPU with mixed precision to optimize performance.
+
+ The model was trained with the `Seq2SeqTrainer` class from `transformers`, with LoRA applied to selected attention layers to cut the number of trainable parameters without compromising accuracy. A configuration sketch is shown below.
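A sketch of that setup under stated assumptions: the LoRA rank, alpha, dropout, and exact target modules are not documented in this card, so the values below are placeholders, while the optimizer, batch size, accumulation steps, epochs, and mixed precision mirror the figures above.

```python
# Sketch of the described fine-tuning setup: LoRA on T5 attention projections,
# trained with Seq2SeqTrainer. r/lora_alpha/lora_dropout/target_modules are
# assumed values; the trainer hyperparameters mirror the list above.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

base_id = "google-t5/t5-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Wrap the base model with LoRA adapters on the attention query/value projections.
model = get_peft_model(model, LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16, lora_alpha=32, lora_dropout=0.05,   # assumed LoRA hyperparameters
    target_modules=["q", "v"],                # T5 attention layers
))

def preprocess(batch):
    # "summarize:" is the conventional T5 task prefix (an assumption here).
    model_inputs = tokenizer(["summarize: " + d for d in batch["dialogue"]],
                             max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

samsum = load_dataset("samsum")
tokenized = samsum.map(preprocess, batched=True,
                       remove_columns=samsum["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="lora-t5-samsum",
    learning_rate=3e-5,                 # AdamW is the Trainer's default optimizer
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    fp16=True,                          # mixed precision on a single GPU
    predict_with_generate=True,         # generate summaries during evaluation
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```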