---
license: llama3.1
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---
# Dolphin 3.0 Llama 3.1 8B 🐬
Curated and trained by [Eric Hartford](https://huggingface.co./ehartford), [Ben Gitter](https://huggingface.co./bigstorm), [BlouseJury](https://huggingface.co./BlouseJury) and [Cognitive Computations](https://huggingface.co./cognitivecomputations)
[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/cNCs1TBD3FelWCJGkZ3cd.png" width="600" />
Our appreciation for the generous sponsors of Dolphin 3.0:
- [Crusoe Cloud](https://crusoe.ai/) - provided 16x L40s for training and evals
- [Akash](https://akash.network/) - provided on-demand 8x H100 for training
- [Lazarus](https://www.lazarusai.com/) - provided 16x H100 for training
- [Cerebras](https://cerebras.ai/) - provided excellent and fast inference services for data labeling
- [Andreessen Horowitz](https://a16z.com/) - provided a grant that made Dolphin 1.0 possible and enabled me to bootstrap my homelab
Appreciation to the creators of the open-source datasets that were used:
- [OpenCoder-LLM](https://huggingface.co./OpenCoder-LLM) (opc-sft-stage1, opc-sft-stage2)
- [microsoft](https://huggingface.co./microsoft) (orca-agentinstruct-1M-v1, orca-math-word-problems-200k)
- [NousResearch](https://huggingface.co./NousResearch) (hermes-function-calling-v1)
- [AI-MO](https://huggingface.co./AI-MO) (NuminaMath-CoT, NuminaMath-TIR)
- [allenai](https://huggingface.co./allenai) (tulu-3-sft-mixture)
- [HuggingFaceTB](https://huggingface.co./HuggingFaceTB) (smoltalk)
- [m-a-p](https://huggingface.co./m-a-p) (CodeFeedback-Filtered-Instruction, Code-Feedback)
Special thanks to:
- Meta, Qwen, and OpenCoder, whose papers were instrumental in creating this model.
- [RLHFlow](https://huggingface.co./RLHFlow) for the excellent reward model used to filter the datasets
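
## Usage

A minimal inference sketch with 🤗 Transformers, assuming the model is published under the repo id `cognitivecomputations/Dolphin3.0-Llama3.1-8B` (assumed here) and ships a Llama 3.1 style chat template:

```python
# Minimal sketch: load the model and run one chat turn.
# Assumptions: repo id below, bfloat16-capable GPU, chat template in the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/Dolphin3.0-Llama3.1-8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Apply the model's chat template and generate a response.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```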