---
license: llama3
datasets:
- arcee-ai/EvolKit-20k
language:
- en
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama-3.1-SuperNova-Lite-GGUF

This is a quantized version of [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co./arcee-ai/Llama-3.1-SuperNova-Lite) created using llama.cpp.
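Because the files in this repository are GGUF quantizations, a common way to run them locally is through llama.cpp or its Python bindings. The snippet below is a minimal sketch using `llama-cpp-python`; the quant filename, context size, and token limit are illustrative assumptions rather than part of this repository's documentation, so check the file list for the actual quant you download.

```python
# Minimal sketch: running a GGUF quant of SuperNova-Lite with llama-cpp-python.
# The model_path filename is a hypothetical example -- use the actual file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.1-SuperNova-Lite.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain model distillation in two sentences."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```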
# Original Model Card

# Llama-3.1-SuperNova-Lite
## Overview

Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B-parameter variant. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.

The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with [EvolKit](https://github.com/arcee-ai/EvolKit), ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.

Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.

# Evaluations

We will be submitting this model to the OpenLLM Leaderboard for a more conclusive benchmark, but here are our internal benchmarks using the main branch of lm-evaluation-harness:

| Benchmark  | SuperNova-Lite | Llama-3.1-8B-Instruct |
|------------|----------------|-----------------------|
| IF_Eval    | 81.1           | 77.4                  |
| MMLU Pro   | 38.7           | 37.7                  |
| TruthfulQA | 64.4           | 55.0                  |
| BBH        | 51.1           | 50.6                  |
| GPQA       | 31.2           | 29.02                 |

The evaluation script is available in this repository at [/eval.sh](https://huggingface.co./arcee-ai/Llama-3.1-SuperNova-Lite/blob/main/eval.sh); a rough sketch of a comparable run appears at the end of this card.

# Note

This readme will be edited regularly throughout September 10, 2024 (the day of release). After the final readme is in place, we will remove this note.
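For readers who want to approximate the benchmark table above, here is a minimal sketch using the Python API of lm-evaluation-harness. The task names, dtype, and batch size below are assumptions chosen for illustration; the authors' exact configuration lives in the linked eval.sh and may differ.

```python
# Minimal sketch (not the authors' script): approximating the benchmarks above
# with the lm-evaluation-harness Python API. Task names, dtype, and batch size
# are assumptions; the exact configuration is in the repository's eval.sh.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=arcee-ai/Llama-3.1-SuperNova-Lite,dtype=bfloat16",
    tasks=["ifeval", "mmlu_pro", "truthfulqa", "bbh", "gpqa"],  # assumed task names
    batch_size="auto",
)

# Print the per-task metrics collected by the harness.
for task, metrics in results["results"].items():
    print(task, metrics)
```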