Tags: Text Generation · Transformers · PyTorch · Safetensors · mistral · text-generation-inference · Inference Endpoints

Wukong-0.1-Mistral-7B-v0.2

Join Our Discord! https://discord.gg/cognitivecomputations


Wukong-0.1-Mistral-7B-v0.2 is a dealigned chat finetune of the fantastic original Mistral-7B-v0.2 model by the Mistral team.

This model was trained on teknium's OpenHermes-2.5 dataset, code datasets from Multimodal Art Projection (https://m-a-p.ai), and the Dolphin dataset from Cognitive Computations (https://erichartford.com/dolphin) 🐬

This model was trained for 3 epochs on 4x RTX 4090 GPUs.
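The card does not yet include usage code. As a minimal sketch, a chat prompt for this model might be assembled in ChatML style, since that format is common to OpenHermes-2.5 and Dolphin finetunes — this is an assumption, not confirmed by the card; verify against the tokenizer's actual chat template before relying on it:

```python
# Hedged sketch: assumes a ChatML-style prompt format (typical for
# OpenHermes-2.5 / Dolphin finetunes). Check the model's bundled chat
# template before using this in production.
def build_chatml_prompt(messages):
    """Render a list of {'role': ..., 'content': ...} dicts as a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open an assistant turn so the model knows to generate a reply next.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

The resulting string would then be tokenized and passed to the model (e.g. via `transformers`' `AutoModelForCausalLM.generate`).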

Example Outputs

TBD

Built with Axolotl

Model size: 7.24B params · Tensor type: BF16 (Safetensors)

Model tree for hflog/RESMPDEV-Wukong-0.1-Mistral-7B-v0.2

Quantizations: 2 models
