# MoECPM Untrained 4x2b

## Model Details

### Model Description
A mixture-of-experts (MoE) model built from four MiniCPM-2B-sft models. It is intended as a base for further training; this untrained version has not been tested and probably does not perform well (if it works at all).
## Uses

- Training (a minimal loading sketch follows below)
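Since the card only states that the model is meant to be trained, here is a minimal, untested sketch of how one might load it for fine-tuning with `transformers`. The `trust_remote_code` flag and the bfloat16 dtype are assumptions, not confirmed settings for this checkpoint.

```python
# Minimal sketch (assumptions noted in comments), not a verified recipe:
# load the untrained MoE so it can be plugged into a fine-tuning loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inv/MoECPM-Untrained-4x2b"

# trust_remote_code is assumed to be needed for the MiniCPM modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    trust_remote_code=True,
)

# The router/gating weights are untrained, so expect to train the model
# (e.g. with the transformers Trainer or TRL) before it produces useful output.
```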
## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 53.51 |
| AI2 Reasoning Challenge (25-Shot) | 46.76 |
| HellaSwag (10-Shot) | 72.58 |
| MMLU (5-Shot) | 53.21 |
| TruthfulQA (0-shot) | 38.41 |
| Winogrande (5-shot) | 65.51 |
| GSM8k (5-shot) | 44.58 |
## Model tree for Inv/MoECPM-Untrained-4x2b

Base model: openbmb/MiniCPM-2B-sft-bf16-llama-format