Llama-3.1-SuperNova-Lite

Overview

Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of Llama-3.1-405B-Instruct, trained on offline logits extracted from the 405B teacher. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.

The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
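The distillation setup described above, training a smaller student against logits precomputed offline by the 405B teacher, is commonly implemented as a KL-divergence loss between temperature-softened distributions. The sketch below is illustrative only and is not Arcee's actual pipeline; the temperature value and T² scaling are conventional knowledge-distillation assumptions, not details from the model card.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the vocabulary axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    averaged over token positions. teacher_logits stand in for the
    offline logits precomputed by the large teacher model."""
    p = softmax(teacher_logits, temperature)           # teacher distribution
    log_q = np.log(softmax(student_logits, temperature))
    kl = (p * (np.log(p) - log_q)).sum(axis=-1)        # per-position KL
    return float(kl.mean() * temperature ** 2)         # conventional T^2 scaling

# Toy example: 3 token positions, vocabulary of 5 (random logits)
rng = np.random.default_rng(0)
teacher = rng.normal(size=(3, 5))
student = rng.normal(size=(3, 5))
loss = distillation_loss(student, teacher)
print(loss)  # non-negative; zero only when student matches teacher
```

Because the teacher logits are computed once offline, the expensive 405B forward passes are paid only a single time, and student training then needs no access to the teacher at all.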

Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric               Value
------               -----
Avg.                 29.73
IFEval (0-shot)      80.17
BBH (3-shot)         31.57
MATH Lvl 5 (4-shot)  15.48
GPQA (0-shot)         7.49
MuSR (0-shot)        11.67
MMLU-PRO (5-shot)    31.97
