GPTQ

GPTQ quants for ludis/tsukasa-limarp-7b. Download the original model except for the .bin files (or download everything and delete the .bin files), then move the contents of whichever quant folder you want to use into the original model folder and run with AutoGPTQ.
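A minimal loading sketch, assuming the merged folder (original files plus the chosen quant) sits at ./tsukasa-limarp-7b-gptq and the quant was exported as safetensors; adjust the path and flags to match your setup:

```python
# Sketch: load the merged GPTQ folder with AutoGPTQ.
# The local path below is an assumption; point it at wherever you
# combined the original model files with the chosen quant.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "./tsukasa-limarp-7b-gptq"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device="cuda:0",
    use_safetensors=True,  # assumes the quant ships .safetensors weights
)
```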

Prompting

https://rentry.org/v43eo - recommended prompts and gen settings

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: <|system|>, <|user|> and <|model|>.

The <|system|> prompt can be used to inject out-of-channel information behind the scenes, while the <|user|> prompt should be used to indicate user input. The <|model|> token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history. An example of building such a prompt follows below.
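As an illustration, a two-turn conversation might be assembled and continued like this, reusing the tokenizer and model loaded above (the system text and messages are placeholders, not prompts shipped with the model):

```python
# Sketch of the <|system|>/<|user|>/<|model|> prompt format.
# The persona and messages below are illustrative placeholders.
prompt = (
    "<|system|>Enter roleplay mode. You are playing a cheerful innkeeper."
    "<|user|>Hello! Do you have a room free tonight?"
    "<|model|>Of course! Let me check the ledger for you."
    "<|user|>Thank you. How much for one night?"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
))
```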

Training

base model (llama-2-7b-hf)

tuned on commit de693ac of the koishi dataset for 1 epoch as part of ludis/tsukasa-7b

then tuned on commit 36fc235 of pippa metharme for 1 epoch as part of ludis/tsukasa-7b

then tuned on Version 2023-09-03 of LimaRP (without ponyville, lolicit, all the fallen, and eka's portal subsets) for 2 epochs
