---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
  - BI55/MedText
  - keivalya/MedQuad-MedicalQnADataset
pipeline_tag: text-generation
---

# TinyLlama 1.1B Medical 🦙

## Model Description

A smaller counterpart to [therealcyberlord/llama2-qlora-finetuned-medical](https://huggingface.co./therealcyberlord/llama2-qlora-finetuned-medical), which was fine-tuned from Llama 2 7B. This version uses TinyLlama 1.1B Chat as the base model instead.

Fine-tuned on instructions formatted with the `<|user|>` and `<|assistant|>` chat tokens.
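As an illustration of this instruction format, the sketch below builds a prompt with the `<|user|>` / `<|assistant|>` markers. The exact template (newlines, the `</s>` end-of-turn token) is an assumption based on TinyLlama-1.1B-Chat's zephyr-style format; verify against the tokenizer's chat template before relying on it.

```python
# Sketch of the <|user|>/<|assistant|> instruction format described above.
# The precise spacing and the </s> end-of-turn token are assumptions based
# on the TinyLlama-1.1B-Chat template, not taken from this model card.

def build_prompt(question: str) -> str:
    """Wrap a question in the chat markers the model was fine-tuned on."""
    return f"<|user|>\n{question}</s>\n<|assistant|>\n"

prompt = build_prompt("What are common symptoms of anemia?")
print(prompt)
```

In practice, `tokenizer.apply_chat_template` on the base model's tokenizer produces the canonical version of this format.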

## How to Get Started with the Model

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

config = PeftConfig.from_pretrained("therealcyberlord/TinyLlama-1.1B-Medical")

# Load the base model and tokenizer, then attach the LoRA adapter on top
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(model, "therealcyberlord/TinyLlama-1.1B-Medical")
```

## Training Details

### Training Data

Two data sources were used:

- [BI55/MedText](https://huggingface.co./datasets/BI55/MedText)
- [keivalya/MedQuad-MedicalQnADataset](https://huggingface.co./datasets/keivalya/MedQuad-MedicalQnADataset)

### Training Procedure

Trained for 1,000 steps on a shuffled combination of the two datasets.
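The combine-and-shuffle step can be sketched with plain Python. The record fields (`question`/`answer`) and the toy examples are hypothetical placeholders; the real column names come from the two datasets above, and the actual training likely used the Hugging Face `datasets` library's `concatenate_datasets` and `shuffle`.

```python
import random

# Illustrative sketch of building the shuffled combined training set.
# The field names and examples are placeholders, not the real data.
medtext = [{"question": "q1", "answer": "a1"}]
medquad = [{"question": "q2", "answer": "a2"}]

combined = medtext + medquad
random.seed(42)           # fixed seed so the shuffle is reproducible
random.shuffle(combined)  # interleave examples from both sources

print(len(combined))  # total number of training examples
```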

## Framework versions

- PEFT 0.7.2.dev0