# SwahiliInstruct-v0.2
This is a Mistral-7B-Instruct-v0.2 model fine-tuned on the Swahili Alpaca dataset for 3 epochs.
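For context, instruction tuning on an Alpaca-style set comes down to rendering each record into the prompt template shown in the next section and training on the resulting text. The sketch below is only illustrative: the dataset ID and column names are assumptions, since the card names the dataset but not its exact location or schema.

```python
from datasets import load_dataset

# Assumed dataset ID and column names -- the card only says "Swahili Alpaca".
dataset = load_dataset("mwitiderrick/swahili-alpaca", split="train")

def render(example):
    # Fold each (instruction, output) pair into the model's prompt template.
    return {
        "text": f"### Maelekezo:\n{example['instruction']}\n### Jibu:\n{example['output']}\n"
    }

dataset = dataset.map(render)
```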
## Prompt Template

```
### Maelekezo:
{query}
### Jibu:
<leave a new line for the model to respond>
```
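A small helper keeps the template consistent across calls; `build_prompt` is an illustrative name, not part of the model's tooling.

```python
def build_prompt(query: str) -> str:
    # The trailing newline after "### Jibu:" leaves room for the model's answer.
    return f"### Maelekezo:\n{query}\n### Jibu:\n"
```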
## Usage

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/SwahiliInstruct-v0.2")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/SwahiliInstruct-v0.2", device_map="auto")

# Swahili for "Give me instructions for making mandazi bread"
query = "Nipe maagizo ya kutengeneza mkate wa mandizi"

text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200, do_sample=True, repetition_penalty=1.1)
output = text_gen(f"### Maelekezo:\n{query}\n### Jibu:\n")
print(output[0]["generated_text"])
"""
Maagizo ya kutengeneza mkate wa mandazi:
1. Preheat tanuri hadi 375°F (190°C).
2. Paka sufuria ya uso na siagi au jotoa sufuria.
3. Katika bakuli la chumvi, ongeza viungo vifuatavyo: unga, sukari ya kahawa, chumvi, mdalasini, na unga wa kakao.
Koroga mchanganyiko pamoja na mbegu za kikombe 1 1/2 za mtindi wenye jamii na hatua ya maji nyepesi.
4. Kando ya uwanja, changanya zaini ya yai 2
"""
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---|
| Avg. | 54.25 |
| AI2 Reasoning Challenge (25-shot, normalized accuracy, test) | 55.20 |
| HellaSwag (10-shot, normalized accuracy, validation) | 78.22 |
| MMLU (5-shot, accuracy, test) | 50.30 |
| TruthfulQA (0-shot, mc2, validation) | 57.08 |
| Winogrande (5-shot, accuracy, validation) | 73.24 |
| GSM8k (5-shot, accuracy, test) | 11.45 |
## Model tree for mwitiderrick/SwahiliInstruct-v0.2

- Base model: mistralai/Mistral-7B-Instruct-v0.2
- Training dataset: Swahili Alpaca