
General

Bluemoon roleplay finetune of LLaMA 33B (two roleplayers only). This release also tests a longer 4k-token context size, achieved with ALiBi.

Models

GGML 5-bit (q5_0) for llama.cpp:

  1. ggml-bluemoonrp-30b-4k-epoch6-q5_0.bin

GPTQ 4-bit CUDA:

  1. bluemoonrp-30b-4k-epoch6-4bit-128g.safetensors

Remarks

This model has been trained using the following prompt (Vicuna 1.1 format):

A transcript of a roleplay between two players, LEAD and ASSOCIATE. LEAD sets up a scenario and the characters, from which ASSOCIATE then assumes a character role and continues the story for that role in response to description given by LEAD. The story and characters are developed by exchange of detailed event descriptions and character dialogs, successively given by both LEAD and ASSOCIATE.
LEAD: [role1 message]
ASSOCIATE: [role2 message]</s>
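The prompt format above can be assembled programmatically. The following is a minimal sketch; the helper name `build_prompt` and the example messages are illustrative and not part of the release.

```python
# Preamble taken verbatim from the prompt format documented above.
PREAMBLE = (
    "A transcript of a roleplay between two players, LEAD and ASSOCIATE. "
    "LEAD sets up a scenario and the characters, from which ASSOCIATE then "
    "assumes a character role and continues the story for that role in "
    "response to description given by LEAD. The story and characters are "
    "developed by exchange of detailed event descriptions and character "
    "dialogs, successively given by both LEAD and ASSOCIATE."
)

def build_prompt(turns):
    """Assemble the training-style prompt.

    turns: list of (speaker, message) tuples, where speaker is
    "LEAD" or "ASSOCIATE" (hypothetical helper, for illustration).
    """
    lines = [PREAMBLE]
    for speaker, message in turns:
        lines.append(f"{speaker}: {message}")
    return "\n".join(lines)
```

Note that in the training data each ASSOCIATE turn ends with the `</s>` end-of-sequence token, so inference front-ends should treat `</s>` as the stopping token for a completed ASSOCIATE reply.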
