Calcium-Opus-14B-Elite-Stock
Calcium-Opus-14B-Elite is based on the Qwen 2.5 14B architecture and is designed to enhance the reasoning capabilities of 14B-parameter models. It has been fine-tuned from a long chain-of-thought reasoning model on specialized datasets, with a focus on chain-of-thought (CoT) reasoning for problem-solving, and has proven effective at context understanding, reasoning, and mathematical problem-solving. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to instruction following, text generation, and complex reasoning tasks.
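A minimal inference sketch with Hugging Face transformers is shown below; the chat prompt and generation settings are illustrative defaults, not settings recommended by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Calcium-Opus-14B-Elite-Stock"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype below
    device_map="auto",
)

# Illustrative multi-step reasoning prompt.
messages = [{"role": "user", "content": "Solve step by step: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```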
This is a merge of pre-trained language models created using mergekit.
Merge Method
This model was merged using the Model Stock merge method, with prithivMLmods/Calcium-Opus-14B-Elite as the base.
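Model Stock (Jang et al., 2024) averages the fine-tuned checkpoints and then interpolates toward the base model, with the interpolation ratio derived from the angle between the fine-tuned task vectors. The per-tensor sketch below illustrates the published formula in NumPy; it is a simplified illustration, not mergekit's actual implementation, and the function name and the small stabilizer constant are assumptions.

```python
import numpy as np

def model_stock_merge(base: np.ndarray, finetuned: list[np.ndarray]) -> np.ndarray:
    """Blend the mean of the fine-tuned tensors with the base tensor,
    weighted by the average pairwise cosine of the task vectors."""
    k = len(finetuned)
    if k < 2:
        # With fewer than two fine-tuned models there is no angle to estimate.
        return finetuned[0] if finetuned else base
    # Task vectors: each fine-tuned tensor relative to the base.
    deltas = [(w - base).ravel() for w in finetuned]
    cosines = [
        deltas[i] @ deltas[j]
        / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]) + 1e-8)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = float(np.mean(cosines))
    # Model Stock interpolation ratio: t = k*cos(theta) / ((k-1)*cos(theta) + 1)
    t = k * cos_theta / ((k - 1) * cos_theta + 1)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

The more the fine-tuned task vectors agree (cosine near 1), the larger t becomes and the closer the result sits to their plain average; disagreement pulls the merge back toward the base weights.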
Models Merged
The following models were included in the merge:
- prithivMLmods/Calcium-Opus-14B-Elite4
- prithivMLmods/Calcium-Opus-14B-Elite3
- prithivMLmods/Calcium-Opus-14B-Elite2
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: prithivMLmods/Calcium-Opus-14B-Elite
  - model: prithivMLmods/Calcium-Opus-14B-Elite2
  - model: prithivMLmods/Calcium-Opus-14B-Elite3
  - model: prithivMLmods/Calcium-Opus-14B-Elite4
merge_method: model_stock
base_model: prithivMLmods/Calcium-Opus-14B-Elite
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
tokenizer_source: "prithivMLmods/Calcium-Opus-14B-Elite"
```
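Assuming mergekit is installed (`pip install mergekit`), a config like the one above can be applied with the `mergekit-yaml` command-line tool. The driver below is a hypothetical sketch; the config filename and output directory are placeholders.

```python
# Hypothetical driver: applies the YAML above (saved as model_stock.yaml)
# via the mergekit CLI and writes the merged weights to the output directory.
import subprocess

subprocess.run(
    ["mergekit-yaml", "model_stock.yaml", "./Calcium-Opus-14B-Elite-Stock"],
    check=True,  # raise if the merge fails
)
```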
Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 36.49     |
| IFEval (0-Shot)     | 61.43     |
| BBH (3-Shot)        | 46.90     |
| MATH Lvl 5 (4-Shot) | 27.19     |
| GPQA (0-Shot)       | 15.77     |
| MuSR (0-Shot)       | 20.06     |
| MMLU-PRO (5-Shot)   | 47.60     |