
KobbleSmall-2B-GGUF

This is the GGUF quantization of the KobbleSmall-2B model.

You can obtain the unquantized model here: https://huggingface.co./concedo/KobbleSmall-2B

Dataset and Objectives

The Kobble Dataset is a semi-private aggregated dataset made from multiple online sources and web scrapes. It contains content chosen and formatted specifically to work with KoboldAI software and Kobold Lite.

Dataset Categories:

  • Instruct: Single-turn instruct examples presented in the Alpaca format, with an emphasis on uncensored and unrestricted responses.
  • Chat: Two-participant roleplay conversation logs in the multi-turn raw chat format that KoboldAI uses.
  • Story: Unstructured fiction excerpts, including literature containing various erotic and provocative content.

Prompt template: Alpaca

### Instruction:
{prompt}

### Response:
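
Below is a minimal sketch of running this model with the template above using llama-cpp-python; the GGUF filename, context size, and sampling settings are placeholder assumptions, not values taken from this card.

# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and one of the GGUF files from this repository has been downloaded locally.
from llama_cpp import Llama

# The filename below is a placeholder; use whichever quantization you downloaded.
llm = Llama(model_path="KobbleSmall-2B.Q4_K_M.gguf", n_ctx=2048)

# Wrap the user request in the Alpaca template shown above.
prompt = (
    "### Instruction:\n"
    "Write a short story about a dragon.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])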

Note: No assurances will be provided about the origins, safety, or copyright status of this model, or of any content within the Kobble dataset.
If you belong to a country or organization that has strict AI laws or restrictions against unlabelled or unrestricted content, you are advised not to use this model.

Model Details

  • Format: GGUF
  • Model size: 2.61B params
  • Architecture: gemma2
  • Available quantizations: 2-bit, 4-bit, 6-bit, 8-bit, 16-bit
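
To fetch a specific quantization file programmatically, a short sketch with the huggingface_hub library is shown below; the repo_id and filename are assumptions, so check this repository's file list for the exact names.

# Sketch, assuming huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

# repo_id and filename are assumptions; pick the quantization you want
# from this repository's file list.
path = hf_hub_download(
    repo_id="concedo/KobbleSmall-2B-GGUF",
    filename="KobbleSmall-2B.Q4_K_M.gguf",
)
print("Downloaded to:", path)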
