
Dolphin 2.8 Mistral 7b v0.2 - GGUF

Description

This repository provides GGUF quantizations of Dolphin 2.8, which is based on Mistral-7B-v0.2.

The base model has a 32k context window, and the full-weight fine-tune was performed with 16k sequence lengths.
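
As a sketch of how those context figures carry over to local inference, the example below assumes llama-cpp-python and a locally downloaded GGUF file; the file name is illustrative rather than taken from the repository's file listing.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF file has
# been downloaded locally (the file name below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=16384,      # matches the 16k fine-tune sequence length; the base model supports up to 32k
    n_gpu_layers=-1,  # offload all layers to the GPU when a GPU build is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short Python function that reverses a string."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```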

Dolphin 2.8 has a variety of instruction, conversational, and coding skills.

Dolphin is uncensored. The dataset was filtered by the creators to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service, since it will be highly compliant with any request, even unethical ones; a minimal sketch of such a layer follows. You are responsible for any content you create using this model.
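
The sketch below, which assumes the llama-cpp-python `llm` object from the loading example above, shows one very simple shape such an application-side alignment layer could take; the policy check is a hypothetical placeholder, not something provided with this model.

```python
# Minimal sketch of an application-side "alignment layer" wrapped around the
# model: requests are screened before they reach the model, and replies are
# screened before they are returned to the user. is_allowed() is a placeholder
# policy check (hypothetical); a real service would call a moderation model or
# policy engine instead.

BLOCKED_TERMS = ("example-disallowed-term",)  # placeholder policy list, not from this model card

def is_allowed(text: str) -> bool:
    """Placeholder check; replace with a real moderation step."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def guarded_chat(llm, user_message: str) -> str:
    # `llm` is a llama_cpp.Llama instance, e.g. the one created in the loading sketch above.
    if not is_allowed(user_message):
        return "Request declined by service policy."
    reply = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are Dolphin, a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        max_tokens=256,
    )["choices"][0]["message"]["content"]
    return reply if is_allowed(reply) else "Response withheld by service policy."
```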

Dolphin is licensed under Apache 2.0, and the creators grant permission for any use, including commercial use. Dolphin was trained on data generated by GPT-4, among other models.

Repository: QuantFactory/dolphin-2.8-mistral-7b-v02-GGUF
Format: GGUF
Model size: 7.24B params
Architecture: llama
Provided quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit
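
To fetch a specific quantization, huggingface_hub can be used as sketched below; the file name is an assumption, so check the repository's file listing for the exact names.

```python
# Sketch of downloading one quantization level with huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="QuantFactory/dolphin-2.8-mistral-7b-v02-GGUF",
    filename="dolphin-2.8-mistral-7b-v02.Q4_K_M.gguf",  # hypothetical: verify against the repo's files
)
print(local_path)
```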
