Model Description

This is a fine-tuned model based on EmbeddedLLM/Mistral-7B-Merge-14-v0.3, trained for 3 epochs. The datasets used are:

  • dolphin
  • dolphin-coder
  • Magicoder-OSS-Instruct-75K
  • openhermes
  • Synthia-v1.3

Chat Template

This model uses the ChatML prompt format:

<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
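
Below is a minimal inference sketch using Hugging Face transformers. It assumes the repository's tokenizer defines this ChatML chat template (so apply_chat_template reproduces the format above); the user message is only an illustrative placeholder.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-15936"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about open source."},
]

# Render the ChatML prompt shown above and append the assistant header.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))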

Training

The model was fine-tuned for 3 epochs on 4 A100 GPUs using axolotl.

Shout-Out to OSS

Thank you to the Open Source AI community for bringing together marvelous code frameworks and datasets.
