Evaluation results

Scores reported on the Open LLM Leaderboard:

| Benchmark | Shots | Split | Metric | Value |
|---|---|---|---|---|
| AI2 Reasoning Challenge (ARC) | 25-shot | test | normalized accuracy | 0.620 |
| HellaSwag | 10-shot | validation | normalized accuracy | 0.844 |
| TruthfulQA | 0-shot | validation | mc2 | 0.574 |
| GSM8K | 5-shot | test | accuracy | 0.127 |
| MMLU | 5-shot | test | accuracy | 0.611 |
| Winogrande | 5-shot | validation | accuracy | 0.777 |
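The Open LLM Leaderboard's headline average is the unweighted mean of these six benchmark scores. A minimal sketch computing it from the values above (the dictionary keys are just labels chosen here for readability):

```python
# Scores reported above (Open LLM Leaderboard, v1 benchmark suite)
scores = {
    "ARC (25-shot, acc_norm)": 0.620,
    "HellaSwag (10-shot, acc_norm)": 0.844,
    "TruthfulQA (0-shot, mc2)": 0.574,
    "GSM8K (5-shot, acc)": 0.127,
    "MMLU (5-shot, acc)": 0.611,
    "Winogrande (5-shot, acc)": 0.777,
}

# The leaderboard average is the simple mean over the six tasks.
average = sum(scores.values()) / len(scores)
print(f"{average:.4f}")  # 0.5922
```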