Uploaded model
- Developed by: shivam9980
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Open LLM Leaderboard Evaluation Results
Detailed results are available on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 17.08 |
| IFEval (0-Shot) | 46.34 |
| BBH (3-Shot) | 11.15 |
| MATH Lvl 5 (4-Shot) | 1.21 |
| GPQA (0-shot) | 7.83 |
| MuSR (0-shot) | 15.67 |
| MMLU-PRO (5-shot) | 20.31 |
Inference Providers
This model is not currently available through any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.
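Since no hosted endpoint is available, the model can be run locally with the `transformers` library. The sketch below is illustrative, not an official usage recipe: it assumes a machine with enough memory for 7B weights, and that the merged model follows the Mistral-Instruct v0.2 chat format (`[INST] ... [/INST]`) inherited from its base model; the `build_prompt` and `generate` helpers are hypothetical names introduced here.

```python
MODEL_ID = "shivam9980/mistral-7b-news-cnn-merged"

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Mistral-Instruct chat template
    ([INST] ... [/INST]), which the v0.2 base model expects."""
    return f"<s>[INST] {instruction} [/INST]"

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Imported here so the heavyweight dependency is only needed at call time.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example (downloads the full model weights on first run):
# print(generate("Summarize this article in two sentences: <article text>"))
```

For chat-style multi-turn use, `tokenizer.apply_chat_template` can build the prompt from a message list instead of the manual `build_prompt` helper shown here.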
Model tree for shivam9980/mistral-7b-news-cnn-merged
Base model
unsloth/mistral-7b-instruct-v0.2-bnb-4bit
Dataset used to train shivam9980/mistral-7b-news-cnn-merged
Evaluation results (Open LLM Leaderboard)
- strict accuracy on IFEval (0-Shot): 46.34
- normalized accuracy on BBH (3-Shot): 11.15
- exact match on MATH Lvl 5 (4-Shot): 1.21
- acc_norm on GPQA (0-shot): 7.83
- acc_norm on MuSR (0-shot): 15.67
- accuracy on MMLU-PRO (5-shot, test set): 20.31