---
base_model: ResplendentAI/Aura_v3_7B
inference: false
language:
- en
library_name: transformers
license: apache-2.0
merged_models:
- ResplendentAI/Paradigm_7B
- jeiku/selfbot_256_mistral
- ResplendentAI/Paradigm_7B
- jeiku/Theory_of_Mind_Mistral
- ResplendentAI/Paradigm_7B
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- ResplendentAI/Paradigm_7B
- ResplendentAI/Paradigm_7B
- jeiku/Luna_LoRA_Mistral
- ResplendentAI/Paradigm_7B
- jeiku/Re-Host_Limarp_Mistral
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
---

# ResplendentAI/Aura_v3_7B AWQ

- Model creator: [ResplendentAI](https://huggingface.co./ResplendentAI)
- Original model: [Aura_v3_7B](https://huggingface.co./ResplendentAI/Aura_v3_7B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/V_DYIcPMJ5_ijanQW_ap2.png)

## Model Summary

Aura v3 is an improvement with a significantly more steerable writing style. Out of the box it will prefer poetic prose, but if instructed, it can adopt a more approachable style. This iteration was trained on erotica, RP data, and NSFW pairs to provide a more compliant mindset.

I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05, as this model can get carried away with prose at higher temperatures. That said, the prose of this model is distinct from the GPT 3.5/4 variety and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.

If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.

This model responds best to ChatML for multiturn conversations.
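Since the card recommends ChatML for multiturn conversations, here is a minimal sketch of how a ChatML prompt can be assembled by hand. The role names and example messages are illustrative, not taken from this model card; in practice, the tokenizer's `apply_chat_template` in `transformers` can produce the same layout automatically.

```python
def build_chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Illustrative conversation; swap in your own system prompt and turns.
prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Aura, a poetic storyteller."},
    {"role": "user", "content": "Describe a quiet harbor at dawn."},
])
print(prompt)
```

The resulting string is what gets tokenized and sent to the model; each prior assistant reply is appended as its own `assistant` turn before the next user message.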
This model, like all other Mistral-based models, is compatible with a Mistral-compatible mmproj file for multimodal vision capabilities in KoboldCPP.