
L3.3-Damascus-R1

Created by SteelSkull

Model Information

L3.3-Damascus-R1

  • L3.3 = Llama 3.3
  • SCE Merge
  • R1 = DeepSeek R1
  • 70b Parameters

Model Composition

Damascus-R1 builds upon some elements of the Nevoria foundation but represents a significant step forward with a completely custom-made DeepSeek R1 Distill base: Hydroblated-R1-V3. Constructed using the new SCE (Select, Calculate, and Erase) merge method, Damascus-R1 prioritizes stability, intelligence, and enhanced awareness.

Technical Architecture

Leveraging the SCE merge method and custom base, Damascus-R1 integrates newly added specialized components from multiple high-performance models:

  • EVA and EURYALE foundations for creative expression and scene comprehension
  • Cirrus and Hanami elements for enhanced reasoning capabilities
  • Anubis components for detailed scene description
  • Negative_LLAMA integration for balanced perspective and response
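
The composition above could be expressed as a mergekit-style SCE recipe. The sketch below is purely illustrative: every model path and parameter value is a hypothetical placeholder, not the actual Damascus-R1 recipe.

```python
import json

# Hypothetical sketch of an SCE merge recipe in a mergekit-style config.
# All model paths and the select_topk value are illustrative placeholders.
sce_recipe = {
    "merge_method": "sce",  # Select, Calculate, and Erase
    "base_model": "example/Hydroblated-R1-V3",      # hypothetical path
    "models": [
        {"model": "example/EVA-Llama-3.3"},         # creative expression
        {"model": "example/Euryale-Llama-3.3"},     # scene comprehension
        {"model": "example/Anubis-Llama-3.3"},      # scene description
        {"model": "example/Negative-LLAMA-3.3"},    # balanced perspective
    ],
    "parameters": {
        # fraction of highest-variance parameter elements selected for
        # merging (illustrative value)
        "select_topk": 0.1,
    },
    "dtype": "bfloat16",
}
print(json.dumps(sce_recipe, indent=2))
```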

Core Philosophy

Damascus-R1 embodies the principle that AI models can be both intelligent and fun. This version specifically addresses recent community feedback and iterates on prior experiments, optimizing the balance between technical capability and natural conversation flow.

Base Architecture

At its core, Damascus-R1 utilizes the entirely custom Hydroblated-R1 base model, specifically engineered for stability, enhanced reasoning, and performance. The SCE merge method, with settings finely tuned based on community feedback from evaluations of Experiment-Model-Ver-A, L3.3-Exp-Nevoria-R1-70b-v0.1, and L3.3-Exp-Nevoria-70b-v0.1, enables precise and effective component integration while maintaining model coherence and reliability.

Recommended Sampler Settings (by @Geechan)

Dynamic Temperature

  • Min: 1.0
  • Max: 1.3-1.35
  • Exponent: 1.0
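
Dynamic temperature is commonly implemented by scaling the temperature between min and max according to the normalized entropy of the next-token distribution, raised to the exponent. A minimal sketch of that common formulation (exact behavior depends on your backend):

```python
import math

def dynamic_temperature(probs, min_temp=1.0, max_temp=1.3, exponent=1.0):
    """Map the normalized entropy of the next-token distribution onto
    [min_temp, max_temp]: confident distributions get a low temperature,
    uncertain ones a high temperature."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    max_entropy = math.log(len(probs))
    normalized = entropy / max_entropy if max_entropy > 0 else 0.0
    return min_temp + (max_temp - min_temp) * normalized ** exponent

# A near-certain distribution stays close to min_temp:
low = dynamic_temperature([0.97, 0.01, 0.01, 0.01])
# A uniform distribution reaches max_temp:
high = dynamic_temperature([0.25, 0.25, 0.25, 0.25])  # → 1.3
```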

Static Temperature

  • 1.2

Min P

  • 0.02
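
Min P keeps only tokens whose probability is at least min_p times the most likely token's probability, so the candidate pool shrinks when the model is confident. A sketch of the rule (renormalization is how samplers typically apply it, but backends vary):

```python
def min_p_filter(probs, min_p=0.02):
    """Zero out tokens below min_p * max(probs), then renormalize."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.02 and a 0.7 top token, anything under 0.014 is dropped:
filtered = min_p_filter([0.7, 0.2, 0.09, 0.01])
```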

DRY Settings

  • Multiplier: 0.8
  • Base: 1.75
  • Length: 4
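
DRY penalizes tokens that would extend a verbatim repeat of earlier context, with the penalty growing exponentially in the repeat length. A sketch of the published DRY formula using the settings above (repeats shorter than the allowed length of 4 are free; exact backend behavior varies):

```python
def dry_penalty(match_length, multiplier=0.8, base=1.75, allowed_length=4):
    """Penalty applied to a token that would extend a repeated sequence of
    `match_length` tokens; repeats shorter than allowed_length are free."""
    if match_length < allowed_length:
        return 0.0
    return multiplier * base ** (match_length - allowed_length)

dry_penalty(3)  # → 0.0 (below the allowed length)
dry_penalty(4)  # → 0.8
dry_penalty(6)  # → 2.45 (0.8 * 1.75**2)
```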

Recommended Templates & Prompts

  • LLam@ception by @.konnect
  • LeCeption by @Steel: an XML version of Llam@ception 1.5.2 with stepped thinking added

Support & Community

Special Thanks

  • @Geechan for feedback and sampler settings
  • @Konnect for their feedback and templates
  • @Kistara for their feedback and help with the model mascot design
  • @Thana Alt for their feedback and Quants
  • @Lightning_missile for their feedback
  • @Yemosvoto for the model name
  • The Arli community for feedback and testing
  • The BeaverAI community for feedback and testing

I wish I could list everyone, but I'm pretty sure the list would be as long as the card!

Quantized model: ReadyArt/L3.3-Damascus-R1_EXl2_6.65bpw_H8