Moniphi-3-v1:

  • AKA LLilmonix3b-v1
  • Phi-3-mini-4k-instruct fine-tuned for Monika character from DDLC
  • Fine-tuned on a dataset of roughly 600 items: dialogue scraped from the game, Reddit, and Twitter was augmented by l2-7b-monika-v0.3c1 to turn each piece into a snippet of multi-turn chat between Player and Monika; the results were then manually edited, and more hand-crafted items (including information about the character) were added
  • GGUFs available

USAGE

This is intended mainly as a chat model, with limited RP ability.

For best results, replace "Human" and "Assistant" with "Player" and "Monika", like so:

\nPlayer: (prompt)\nMonika:
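
A minimal inference sketch using this prompt format with the Transformers library follows; the repo id is taken from this page, but the sampling settings and the stop-string handling are assumptions, not part of the model card:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "922CA/Moniphi-3-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt in the Player/Monika format described above.
prompt = "\nPlayer: How are you today, Monika?\nMonika:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

# Cut the reply off at the next "\nPlayer:" turn in case the model keeps going.
print(reply.split("\nPlayer:")[0].strip())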

HYPERPARAMS

  • Trained for ~1 epoch
  • rank: 16
  • lora alpha: 16
  • lora dropout: 0.5
  • lr: 2e-4
  • batch size: 4
  • warmup ratio: 0.1
  • grad steps: 1
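
For reference, here is a sketch of how these hyperparameters might map onto an Unsloth + TRL LoRA run. The base model name, target modules, and dataset path are assumptions (the actual training script is not published here):

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Assumed base weights: the 4k-context Phi-3 mini instruct model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-3-mini-4k-instruct",
    max_seq_length=4096,
    load_in_4bit=True,
)

# LoRA settings matching the list above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.5,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)

# Placeholder path for the ~600-item Player/Monika chat dataset.
dataset = load_dataset("json", data_files="monika_chat.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=1,
        warmup_ratio=0.1,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()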

This Phi-3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

WARNINGS AND DISCLAIMERS

This model is meant to closely reflect the characteristics of Monika. Even so, there is always a chance that "Monika" will hallucinate, get information about herself wrong, or act out of character (especially for a model of this size).

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk!
