SLM for Customer Support Interactions
Description
This model is a fine-tuned version of DistilGPT-2, optimized for customer support interactions. It was trained on a dataset of dialogues between customers and support agents to improve conversational AI performance.
- Model type: Transformer-based small language model (SLM)
- Language(s) (NLP): English
- Fine-tuned from model: DistilGPT-2
Uses
The fine-tuned DistilGPT-2 SLM is designed to enhance customer support interactions by generating accurate, contextually relevant responses. It can be integrated into customer service chatbots, virtual assistants, and automated helpdesk systems to handle routine inquiries efficiently, helping businesses improve response times, reduce human-agent workload, and keep customer communication consistent.
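As a rough illustration of such an integration, a chatbot backend could call the model through the Transformers text-generation pipeline. This is a minimal sketch; the `answer_query` helper and its generation settings are illustrative assumptions, not part of the released model:

from transformers import pipeline

# Load the fine-tuned model through the high-level text-generation pipeline
generator = pipeline(
    "text-generation",
    model="novumlogic/nl-slm-distilgpt2-customer-support",
)

def answer_query(query: str) -> str:
    # Generate one candidate response for an incoming customer query
    result = generator(query, max_new_tokens=150, do_sample=True, num_return_sequences=1)
    return result[0]["generated_text"]

print(answer_query("payment options"))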
Out-of-Scope Use
Should not be used for general conversational AI applications unrelated to customer service.
Recommendations
Users should validate outputs before deploying them in live customer support environments and ensure regular updates to align with evolving support needs.
How to Get Started with the Model
Use the code below to get started with the model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("novumlogic/nl-slm-distilgpt2-customer-support")
model = AutoModelForCausalLM.from_pretrained("novumlogic/nl-slm-distilgpt2-customer-support")

# GPT-2 tokenizers have no pad token by default; reuse the end-of-sequence token
tokenizer.pad_token = tokenizer.eos_token

input_str = "payment options"

# Encode the input string with padding and an attention mask
encoded_input = tokenizer(
    input_str,
    return_tensors='pt',
    padding=True,
    truncation=True,
    max_length=50  # adjust max_length as needed
)

# Move the model and input tensors to the appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
input_ids = encoded_input['input_ids'].to(device)
attention_mask = encoded_input['attention_mask'].to(device)

# Generate the output
output = model.generate(
    input_ids,
    attention_mask=attention_mask,
    max_length=400,  # adjust max_length as needed
    num_return_sequences=1,
    do_sample=True,
    top_k=8,
    top_p=0.95,
    temperature=0.5,
    repetition_penalty=1.2,
    pad_token_id=tokenizer.eos_token_id
)

# Decode and print the output
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
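The sampling settings above (moderate temperature, top-k and top-p filtering, and a repetition penalty) trade some determinism for more natural, less repetitive replies. For stricter, more reproducible output, lower the temperature or set do_sample=False to use greedy decoding.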
Training Details
Training Data
Customer Support Interactions Dataset: 26,000 rows (20,800 training, 5,200 validation), available at https://huggingface.co./datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset
Training Procedure
Preprocessing
- Data cleaning: standardizing text and removing noise.
- Tokenization: used DistilGPT-2's tokenizer to convert text into token sequences.
- Formatting: structured each example as a "Query | Response" pair (see the sketch after this list).
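As a rough illustration of this preprocessing (the `instruction` and `response` field names follow the Bitext dataset; the exact cleaning rules used for training are not published, so the `clean` helper here is an assumption):

import re
from datasets import load_dataset

dataset = load_dataset("bitext/Bitext-customer-support-llm-chatbot-training-dataset")

def clean(text: str) -> str:
    # Illustrative cleaning step: collapse whitespace and strip stray spaces
    return re.sub(r"\s+", " ", text).strip()

def to_pair(example):
    # Structure each row as a "Query | Response" training example (assumed format)
    example["text"] = f"{clean(example['instruction'])} | {clean(example['response'])}"
    return example

formatted = dataset["train"].map(to_pair)
print(formatted[0]["text"])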
Training Hyperparameters
Training regime (a rough reproduction is sketched after this list):
- Batch size: 15
- Epochs: 3
- Optimizer: Adam with a linear learning rate scheduler
- Training frameworks: PyTorch, Hugging Face Transformers
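The configuration above could be reproduced with the Hugging Face Trainer roughly as follows. This is a minimal sketch under the listed hyperparameters; the learning rate, sequence length, and tokenization details are assumptions, since they are not stated on this card:

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Tokenize "Query | Response" examples (field names follow the Bitext dataset)
dataset = load_dataset("bitext/Bitext-customer-support-llm-chatbot-training-dataset")

def tokenize(example):
    text = f"{example['instruction']} | {example['response']}"
    return tokenizer(text, truncation=True, max_length=256)  # max_length assumed

tokenized_train = dataset["train"].map(tokenize, remove_columns=dataset["train"].column_names)

training_args = TrainingArguments(
    output_dir="nl-slm-distilgpt2-customer-support",
    per_device_train_batch_size=15,  # batch size from the list above
    num_train_epochs=3,              # epochs from the list above
    lr_scheduler_type="linear",      # linear learning-rate schedule, as listed
    learning_rate=5e-5,              # assumed default; not stated on the card
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    # mlm=False gives standard causal (next-token) language modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()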
Results
Dataset | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | Perplexity
---|---|---|---|---|---
Customer Support Interactions | 0.7102 | 0.4586 | 0.5610 | 0.6924 | 1.4273
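Scores like these can be computed with the `evaluate` library. The sketch below is illustrative: the prediction/reference pair is made up, and the perplexity line only shows the relationship exp(mean loss) = perplexity:

import math
import evaluate

rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

# Illustrative prediction/reference pair; a real evaluation would loop over
# the 5,200 validation examples.
predictions = ["You can pay by credit card, PayPal, or bank transfer."]
references = ["We accept credit card, PayPal, and bank transfer payments."]

print(rouge.compute(predictions=predictions, references=references))
print(meteor.compute(predictions=predictions, references=references))

# Perplexity is the exponential of the mean cross-entropy loss on the
# validation set; a mean loss of about 0.356 matches the reported 1.4273.
print(math.exp(0.356))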
Summary
The fine-tuned DistilGPT-2 SLM for Customer Support Interactions is a compact, efficient language model designed to enhance automated customer service. Trained on 26,000 customer-agent dialogues, it improves chatbot performance by generating accurate, context-aware responses to customer queries.
Glossary
SLM (Small Language Model): A compact language model optimized for efficiency.
Perplexity: Measures how well a model predicts the next token in a sequence; lower values indicate better predictions.
ROUGE & METEOR: Metrics that score generated text against reference text to evaluate generation quality.
Author
Novumlogic Technologies Pvt Ltd