# MedLLM
## Model Description
MedLLM is a language model designed to assist healthcare professionals and patients by providing detailed information about the symptoms of various diseases. It can support preliminary assessments and educational use by offering accurate, concise symptom descriptions.
## Model Details
- Architecture: Transformer-based, optimized for medical text.
- Training Data: A diverse collection of medical texts, including clinical notes, research articles, and symptom databases, chosen to give broad coverage of disease symptoms.
- Training Procedure: Fine-tuned with supervised learning, emphasizing accuracy and relevance in medical contexts; hyperparameters were tuned for natural language understanding and generation. A minimal fine-tuning sketch is shown below.
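The exact training script is not published here. The snippet below is only a minimal sketch of a supervised fine-tuning setup using the Hugging Face `Trainer`; the base checkpoint, the `symptom_texts.jsonl` dataset file, and all hyperparameters are illustrative assumptions, not the values actually used for MedLLM.

```python
# Illustrative supervised fine-tuning sketch (not the actual MedLLM training script).
# Base checkpoint, dataset path, and hyperparameters are assumptions.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

base_model = "gpt2"  # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical JSON-lines file with one {"text": "..."} record per symptom description.
dataset = load_dataset("json", data_files="symptom_texts.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="medllm-sft",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```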
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hub.
model_name = "shivvamm/MedLLM"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Ask the model about disease symptoms.
inputs = tokenizer("What are the symptoms of diabetes?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
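For longer or more varied answers, standard sampling arguments can be passed to `generate`; the values below are illustrative defaults, not recommendations specific to this model.

```python
# Illustrative sampling settings; tune for your use case.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```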