
# Chronos-Mistral-7B

This is the FP16 PyTorch / Hugging Face version of chronos-mistral-7b, finetuned from the Mistral v0.1 base model.

PLEASE NOTE: This is an experimental model, and further iterations will likely be released.

Use this version only for further quantization, or to run the model in full precision if you have the required VRAM.
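If you lack the VRAM for full precision, one common option is on-the-fly 4-bit quantization at load time. This is a minimal sketch using the Hugging Face `transformers` `BitsAndBytesConfig` API (the model ID matches this repo; the specific quantization settings here are illustrative assumptions, not recommendations from the model author):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit (NF4) loading config; requires the bitsandbytes package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("elinas/chronos-mistral-7b")
model = AutoModelForCausalLM.from_pretrained(
    "elinas/chronos-mistral-7b",
    quantization_config=bnb_config,
    device_map="auto",  # spreads layers across available GPUs / CPU
)
```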

This model is primarily focused on chat, roleplay, and storywriting, with good reasoning and logic.

Chronos can generate very long, coherent outputs, largely due to the human-written inputs it was trained on. It supports a context length of up to 4096 tokens natively, and up to 16384 tokens with RoPE scaling while maintaining solid coherency.
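As a rough sketch of how the extended context works: linear RoPE scaling stretches the position range by the ratio of target to native context, here 16384 / 4096 = 4. The `rope_scaling` dict below mirrors the format accepted by Hugging Face `transformers` model configs; treat the exact values as an assumption, not the author's tested settings:

```python
# Native and extended context lengths stated in the model card.
NATIVE_CONTEXT = 4096
TARGET_CONTEXT = 16384

# Linear RoPE scaling compresses position indices so the extended
# context fits within the positional range seen during training.
scaling_factor = TARGET_CONTEXT / NATIVE_CONTEXT  # 4.0

# Config fragment in the format transformers expects, e.g.:
# AutoModelForCausalLM.from_pretrained(..., rope_scaling=rope_scaling)
rope_scaling = {"type": "linear", "factor": scaling_factor}
print(rope_scaling)
```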

This model uses the Alpaca prompt format, so for optimal performance, use it to start the dialogue or story; if you use a frontend like SillyTavern, ENABLE instruction mode:

```
### Instruction:
{Your instruction or question here.}

### Response:
```

Not using the format will make the model perform significantly worse than intended unless it is merged.
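The Alpaca format above can be assembled programmatically. This is a minimal sketch; the `build_prompt` helper is a hypothetical convenience function, not part of any official API:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca format Chronos expects."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

# The model's completion is then generated after the "### Response:" header.
prompt = build_prompt("Write a short story about a lighthouse keeper.")
print(prompt)
```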

## Other Versions (Quantizations)

TBD

Support My Development of New Models
