⚡ExLlamaV2 quant of : Cakrawala-Llama-3.1-8B

➡️ Exl2 version : 0.2.6
➡️ Cal. dataset : Default.
📄 Measurement.json file.

🎭 Cakrawala-Llama-3.1-8B

Where Worlds Converge and Adventures Begin!

🌟 What's Special About This Model?

Cakrawala-Llama-3.1-8B is a fine-tuned variant of Llama-3.1-8B-Instruct, optimised for generating rich roleplaying conversations and character interactions. It is trained to produce detailed, contextually appropriate character dialogue with vivid descriptions of physical actions, expressions, and emotional states, while maintaining consistent character voices and perspectives throughout extended interactions.
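
As a fine-tune of Llama-3.1-8B-Instruct, the model should follow the standard Llama 3.1 chat template. A minimal sketch of assembling a single-turn roleplay prompt by hand (the character and messages below are illustrative only):

```python
# Llama 3.1 Instruct chat format, inherited from the base model.
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt using Llama 3.1's header tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical roleplay setup: a system prompt defining the character,
# followed by the user's opening message.
prompt = build_prompt(
    "You are Kael, a gruff dwarven blacksmith. Stay in character.",
    "*pushes open the forge door* Morning, Kael. Got time for a commission?",
)
print(prompt)
```

In practice, most frontends (and `tokenizer.apply_chat_template` in Transformers) build this string for you; the sketch just shows what the model sees.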

🧪 The Secret Sauce

Training Diet:

  • Fed with 13,000 conversation pairs
  • Each conversation runs at least 12-13 turns
  • Heavy emphasis on details like facial expressions, environmental descriptions, and character reactions, geared toward keeping the model in character

Tech Wizardry:

  • Trained on Llama-3.1-8B-Instruct
  • Fine-tuned using QLoRA
  • Trained over 2 epochs

Training Parameters

  • Gradient Accumulation Steps: 1
  • Micro Batch Size: 4
  • Learning Rate: 0.0002
  • Optimizer: AdamW
  • Scheduler: Cosine
  • Mixed Precision: BF16 & FP16 with TF32 support
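
The parameters above map naturally onto an Axolotl-style training config. A minimal sketch, assuming Axolotl (or a similar YAML-driven trainer) was used, which the card does not confirm; field names and the 4-bit QLoRA settings are illustrative:

```yaml
# Hypothetical config reflecting the listed training parameters.
base_model: meta-llama/Llama-3.1-8B-Instruct
adapter: qlora            # fine-tuned with QLoRA
load_in_4bit: true        # assumed: QLoRA implies a 4-bit base

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 2
learning_rate: 0.0002
optimizer: adamw_torch
lr_scheduler: cosine

bf16: auto                # mixed precision: BF16 & FP16
tf32: true                # TF32 matmul support
```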

🔧 Under the Hood

  • Trained on 8 x H100 NVL GPUs

🎬 License & Credits

  • Licensed under MIT
  • Based on meta-llama/Llama-3.1-8B-Instruct

GGUF Quants

Built with ❤️ for roleplayers, by roleplayers
