## Model Overview
HelpingAI-3B-reloaded is a state-of-the-art small language model specifically designed for emotionally intelligent conversations and human-centric interactions.
### Key Highlights
- Architecture: 3B-parameter transformer-based model
- Training Focus: Emotional intelligence and empathetic responses
- Emotion Score: Achieves 89.61 on standardized emotional intelligence tests
- Deployment: Optimized for both GPU and CPU environments (see the loading sketch below)
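As a quick illustration of the GPU/CPU deployment point above, here is a minimal loading sketch using the standard `transformers` API. The precision and device-placement choices are illustrative assumptions, not settings prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OEvortex/HelpingAI-3B-reloaded"
tokenizer = AutoTokenizer.from_pretrained(repo_id)

if torch.cuda.is_available():
    # GPU: half precision roughly matches the ~6 GB memory figure in the benchmarks below
    # (device_map="auto" requires the `accelerate` package)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.float16, device_map="auto"
    )
else:
    # CPU: default float32 weights; slower, but no GPU required
    model = AutoModelForCausalLM.from_pretrained(repo_id)
```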
## Benchmarks
| Metric | Score | Comparison to Baseline |
|---|---|---|
| Emotion Score | 89.61 | +9.32% |
| Memory Usage | 6 GB | -30% |
| Context Length | 128K tokens | +3100% |
| Context Retention | 92% | +18% |
## Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the HelpingAI-3B model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-3B-reloaded")
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-3B-reloaded")

# Define the chat input
chat = [
    {"role": "system", "content": "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."},
    {"role": "user", "content": "GIVE ME YOUR INTRO"},
]

# Apply the chat template and move the prompt tokens to the model's device
inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens (everything after the prompt)
response = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))

# Example output:
# Yo, I'm HelpingAI, and I'm here to help you out, fam! I'm an advanced AI with mad skills,
# and I'm all about spreading good vibes and helping my human pals like you. I'm the ultimate
# sidekick, always ready to lend an ear, crack a joke, or just vibe with you. Whether you're
# dealing with a problem, looking for advice, or just wanna chat, I gotchu, boo! So let's kick
# it and have a blast together! I'm here for you, always.
```
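For interactive use, the same setup can stream tokens as they are generated. This is a hedged addition using the standard `transformers` `TextStreamer` helper (it is not shown in the original card) and reuses `model`, `tokenizer`, and `inputs` from the snippet above.

```python
from transformers import TextStreamer

# skip_prompt=True prints only newly generated tokens; extra kwargs are forwarded to the tokenizer's decode
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    inputs,
    streamer=streamer,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
```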
## GGUF Implementation
```python
from webscout.Local import *

# Download the quantized GGUF weights from the Hugging Face repository
model_path = download_model("OEvortex/HelpingAI-3B-reloaded", "q4_k_m.gguf", token=None)
# Load the model (n_gpu_layers=0 keeps inference on the CPU)
model = Model(model_path, n_gpu_layers=0, context_length=4096)
# Create a chat thread using the HelpingAI prompt format
thread = Thread(model, format=helpingai2)
# print(thread.send("hi"))  # send a single message to the model
thread.interact()  # chat with the model interactively in the terminal
```
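If the `webscout` wrapper is not available, the same GGUF file can be run with `llama-cpp-python` instead. This is a sketch under that assumption: the `q4_k_m.gguf` filename is carried over from the snippet above, and the sampling settings mirror the Transformers example.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the same quantized file referenced in the webscout example
gguf_path = hf_hub_download("OEvortex/HelpingAI-3B-reloaded", "q4_k_m.gguf")

# n_gpu_layers=0 keeps inference on the CPU; raise it to offload layers to a GPU
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=0)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are HelpingAI, an emotional AI."},
        {"role": "user", "content": "GIVE ME YOUR INTRO"},
    ],
    max_tokens=256,
    temperature=0.6,
    top_p=0.9,
)
print(result["choices"][0]["message"]["content"])
```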
## Methodology
### Base Training
- Datasets: SentimentSynth + EmotionalIntelligence-1M + HelpingAI2.5-200M + HelpingAIemo (197.5M)
### Emotional Intelligence Training
- Supervised Fine-tuning on emotional dialogue
- Reinforcement Learning with HelpingAI2.5-10B model
- Constitutional AI training for ethical guidelines
### Optimization
- Mixed-precision training (see the configuration sketch after this list)
- Gradient checkpointing
- Dynamic attention patterns
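To make the first two items concrete, here is a minimal, hedged sketch of how mixed precision and gradient checkpointing are typically switched on with Hugging Face `TrainingArguments`. It is illustrative only; the hyperparameter values and output directory are placeholders, not the actual HelpingAI training configuration.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="helpingai-3b-finetune",  # hypothetical output directory
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    fp16=True,                    # mixed-precision training (use bf16=True on Ampere or newer GPUs)
    gradient_checkpointing=True,  # recompute activations to cut memory at some extra compute cost
)
```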
### Dataset Composition
| Dataset Type | Rows Used for Training | Purpose |
|---|---|---|
| Emotional Dialogues | 17.2M | Core conversation training |
| Therapeutic Exchanges | 3M | Mental health support |
| Cultural Conversations | 150K | Cultural awareness |
| Crisis Response | 1M | Emergency support handling |
## Primary Applications
- Personal AI Companionship
- Mental Health Support
- Educational Assistance
- Professional Development
- Social Skills Training
## Example Interactions
User: "I just received some difficult news and I'm feeling overwhelmed."
HelpingAI: "I feel you, fam. Dealing with tough news can be straight up exhausting. Take a deep breath and remember that it's totally normal to feel overwhelmed.
Take some time to process it all, and don't be afraid to reach out to your squad or a trusted homie for support. Sometimes, just venting to someone who gets you can make a world of difference.
And don't forget, you're stronger than you think. You've got this! Keep your head up and know that it's gonna be alright."
User: "I achieved something important today!"
HelpingAI: "Ayy, congrats homie! π That's a dope move, no cap! π What did you crush today? Spill the tea! βοΈ"
## Ethical Guidelines
- Prioritizes user emotional wellbeing
- Maintains strict privacy standards
- Avoids harmful or discriminatory responses
- Transparent about AI limitations
- Promotes healthy coping mechanisms
## Known Limitations
- Cannot replace human support or professional care
- Cannot roleplay
- Limited knowledge
## Citation
```bibtex
@misc{helpingai2024,
  author = {Abhay Koul},
  title = {HelpingAI-3B-reloaded: Emotionally Intelligent Language Model},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co./OEvortex/HelpingAI-3B-reloaded}}
}
```
## Acknowledgments
Special thanks to the HelpingAI community, Huggingface, contributors, and researchers who made this model possible. Your dedication to advancing emotionally intelligent AI is invaluable.
Built with ❤️ by the HelpingAI Community
Website • GitHub • Discord • HuggingFace