Uploaded model

  • Developed by: student-abdullah
  • License: apache-2.0
  • Finetuned from model: meta-llama/Llama-3.2-1B
  • Created on: 29 September 2024

Model Description

This model is fine-tuned from the meta-llama/Llama-3.2-1B base model to improve its ability to generate relevant and accurate responses about generic medications under the PMBJP (Pradhan Mantri Bhartiya Janaushadhi Pariyojana) scheme. The fine-tuning process used the following hyperparameters (a configuration sketch follows the list):

  • Fine Tuning Template: Llama Q&A
  • Max Tokens: 512
  • LoRA Alpha: 32
  • LoRA Rank (r): 128
  • Learning rate: 1.5e-4
  • Gradient Accumulation Steps: 4
  • Batch Size: 8
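
For reference, a setup along these lines could be expressed with the PEFT and transformers libraries. The sketch below is illustrative only, not the exact training script used for this model; the dataset preparation, the Llama Q&A template, and names such as `output_dir` are assumptions:

```python
# Illustrative LoRA fine-tuning setup matching the hyperparameters above
# (PEFT + transformers assumed; not the exact script used for this model).
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=128,            # LoRA rank, as listed above
    lora_alpha=32,    # LoRA alpha, as listed above
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="llama3.2-pmbjp-lora",   # hypothetical output path
    learning_rate=1.5e-4,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
)
# Sequences would be truncated to the 512-token limit during dataset
# preprocessing (not shown here).
```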

Model Quantitative Performance

  • Final training loss: 0.1207 (at epoch 800)

Limitations

  • Token Limitations: With a max token limit of 512, the model might not handle very long queries or contexts effectively.
  • Training Data Limitations: The model’s performance is contingent on the quality and coverage of the fine-tuning dataset, which may affect its generalizability to different contexts or medications not covered in the dataset.
  • Potential Biases: As with any model fine-tuned on specific data, there may be biases based on the dataset used for training.

Model Details

  • Format: Safetensors
  • Model size: 1.24B params
  • Tensor type: FP16

Inference Examples
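
The model can be loaded with the standard transformers API. The sketch below is a minimal example; the prompt is a hypothetical Hinglish query and the generation settings are illustrative:

```python
# Minimal inference sketch using the standard transformers API
# (the prompt is a hypothetical example query).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "student-abdullah/Llama3.2_Medicine-Hinglish-Dataset_Fine-Tuned_29-09"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")

prompt = "Paracetamol ka generic alternative kya hai?"  # hypothetical query
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```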
