What is it?

This model is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and equally at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one-shot instructions, multi-turn instructions, roleplaying scenarios, text adventure games, co-writing, and much more. The full dataset is publicly available and can be found in the Datasets section of the model page.

No harmfulness alignment has been performed on this model; please take appropriate precautions when using it in a production environment.

Prompting

The model has been trained on the standard "ChatML" prompt format, an example of which is shown below:

<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant

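The turn structure above can be assembled programmatically. The helper below is an illustrative sketch, not part of the model's tooling; in practice, libraries such as `transformers` handle this via the tokenizer's chat template.

```python
# Illustrative helper: assemble a ChatML prompt from a list of messages.
# This mirrors the format shown above; real pipelines would typically rely
# on the tokenizer's built-in chat template instead.

def build_chatml_prompt(messages, add_generation_prompt=True):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Leave the final assistant turn open so the model completes it.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
prompt = build_chatml_prompt(messages)
```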
SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

Context template
{
    "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "allow_jailbreak": false,
    "always_force_name2": false,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Dan-ChatML"
}

Instruct template
{
    "system_prompt": "Write {{char}}'s actions and dialogue, user will write {{user}}'s.",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "first_output_sequence": "",
    "last_output_sequence": "",
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "stop_sequence": "<|im_end|>",
    "wrap": false,
    "macro": true,
    "names": false,
    "names_force_groups": false,
    "activation_regex": "",
    "skip_examples": false,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_sequence": "<|im_start|>system\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "last_system_sequence": "",
    "system_same_as_user": false,
    "first_input_sequence": "",
    "last_input_sequence": "",
    "name": "Dan-ChatML"
}
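To show how the instruct template's fields compose, here is a minimal sketch (not actual SillyTavern code) that applies the sequences to a single turn; with "wrap" set to false, the sequence, the text, and the suffix are concatenated directly, producing a well-formed ChatML turn.

```python
# Minimal sketch of how instruct-template sequences wrap a turn.
# Field values mirror the Dan-ChatML template above; the wrap_turn
# helper is hypothetical, for illustration only.
template = {
    "input_sequence": "<|im_start|>user\n",
    "input_suffix": "<|im_end|>\n",
    "output_sequence": "<|im_start|>assistant\n",
    "output_suffix": "<|im_end|>\n",
}

def wrap_turn(template, role, text):
    # With "wrap": false, no extra newlines are inserted between
    # the sequence, the message text, and the suffix.
    return template[f"{role}_sequence"] + text + template[f"{role}_suffix"]

user_turn = wrap_turn(template, "input", "Hi there!")
assistant_turn = wrap_turn(template, "output", "Nice to meet you!")
```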

Training

This model was full-finetuned for 4 epochs on 8x H100 GPUs, taking 21 hours (168 GPU-hours).

Built with Axolotl
