# Model Card for LLaMA-2 Fine-Tuned on Agriculture Dataset
This model is a fine-tuned version of the meta-llama/Llama-2-7b-hf base model, optimized for agriculture-related instructions and queries. It was fine-tuned on the Mahesh2841/Agriculture dataset to improve the model's ability to answer questions about agricultural practices, crop management, and related topics.
## Model Details
### Model Description
- Developed by: bagasbgs2516
- Shared by: PT. Clevio
- Model type: Causal Language Model
- Language(s): English
- License: LLaMA-2 Community License Agreement
- Finetuned from model: meta-llama/Llama-2-7b-hf
## Uses
### Direct Use
The model is suitable for:
- Answering questions related to agriculture.
- Providing instructions on crop management, soil fertility, pest control, and other farming-related tasks.
### Downstream Use
This model can be further fine-tuned or adapted for specific agricultural tasks (see the sketch after this list), such as:
- Developing chatbots for farmers.
- Generating FAQ systems for agricultural platforms.
- Enhancing agricultural extension services.
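As a starting point for such adaptation, the published adapter can be reloaded with trainable weights. A minimal sketch, assuming the standard PEFT API (the `is_trainable` flag belongs to `PeftModel.from_pretrained`):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and reattach the published LoRA adapter with
# trainable weights so it can be fine-tuned further on new data.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base,
    "bagasbgs2516/llama2-agriculture-lora",
    is_trainable=True,  # without this flag the adapter loads frozen (inference-only)
)
model.print_trainable_parameters()
```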
### Out-of-Scope Use
The model is not suitable for:
- Topics outside of agriculture.
- Tasks requiring precision in non-agricultural domains, as its performance may be unreliable.
## Bias, Risks, and Limitations
### Recommendations
Users should be aware of the following limitations:
- Bias: The dataset may reflect biases inherent in its source, which can surface as skewed or inaccurate answers.
- Risk: Outputs should be verified by agricultural experts before implementation in critical scenarios.
- Limitation: The model is fine-tuned for English and may not perform well with other languages.
## How to Get Started with the Model
Use the code below to load and use the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_path = "meta-llama/Llama-2-7b-hf"
lora_model_path = "bagasbgs2516/llama2-agriculture-lora"

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(base_model_path)

# Apply the LoRA adapter
model = PeftModel.from_pretrained(base_model, lora_model_path)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_path)

# Generate a response
input_text = "What are the best practices for improving soil fertility?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=200)  # cap on generated tokens
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
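If a GPU is available, the base model can instead be loaded in half precision to reduce memory use. A minimal sketch, assuming torch and the accelerate package are installed:

```python
import torch
from transformers import AutoModelForCausalLM

# Optional: load the base model in fp16 and let accelerate place it on
# available devices; roughly halves memory versus full precision.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
```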
## Training Details
### Training Data
The model was fine-tuned on the Mahesh2841/Agriculture dataset, which contains agriculture-related instructions, inputs, and responses.
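For reference, the dataset can be pulled and inspected with the `datasets` library (a minimal sketch; the `train` split name is an assumption):

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub and print one
# record to see its fields (instruction / input / response, per the
# description above).
ds = load_dataset("Mahesh2841/Agriculture", split="train")
print(ds[0])
```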
### Training Procedure
- Framework: Hugging Face Transformers with PEFT
- Precision: mixed precision (fp16) for faster training
- Hardware: NVIDIA A100-SXM4-40GB GPUs
- Epochs: 3
- Batch size: 1 (gradient accumulation steps: 8)
- Learning rate: 2e-4
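A minimal sketch of how these settings map onto a PEFT `LoraConfig` and `TrainingArguments`; the LoRA rank, alpha, and target modules below are illustrative assumptions, not values reported in this card:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA settings: r, lora_alpha, and target_modules are illustrative
# assumptions; the card does not report them.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Hyperparameters as reported in this card.
training_args = TrainingArguments(
    output_dir="llama2-agriculture-lora",
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    fp16=True,  # mixed-precision training
)
```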
## Citation
If you use this model in your work, please cite it as follows:

```bibtex
@misc{bagas2024llama2agriculture,
  title={LLaMA-2 Fine-Tuned on Agriculture Dataset},
  author={Bagas},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co./bagasbgs2516/llama2-agriculture-lora}
}
```
### Framework versions
- PEFT 0.13.2