---
license: llama3.1
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---

# Dolphin 3.0 Llama 3.1 8B 🐬

Curated and trained by [Eric Hartford](https://huggingface.co./ehartford), [Ben Gitter](https://huggingface.co./bigstorm), and [Cognitive Computations](https://huggingface.co./cognitivecomputations)

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)

Discord: https://discord.gg/cognitivecomputations

Our appreciation for the generous sponsors of Dolphin 3.0:

- [Crusoe Cloud](https://crusoe.ai/) - provided 16x L40s for training and evals
- [Akash](https://akash.network/) - provided on-demand 8x H100 for training
- [Lazarus](https://www.lazarusai.com/) - provided 16x H100 for training
- [Cerebras](https://cerebras.ai/) - provided excellent and fast inference services
- [Andreessen Horowitz](https://a16z.com/) - provided the grant that made Dolphin 1.0 possible and enabled me to bootstrap my homelab