
# shadow-clown-7B-dare

shadow-clown-7B-dare is a DARE merge of the following models using mergekit:

- [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) (base model)
- [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
- [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
- [CorticalStack/neurotic-crown-clown-7b-ties](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-ties)

See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the DARE method.

## 🧩 Configuration

```yaml
models:
  - model: yam-peleg/Experiment26-7B
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.52
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.52
      weight: 0.2
  - model: CorticalStack/neurotic-crown-clown-7b-ties
    parameters:
      density: 0.52
      weight: 0.3
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
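The `density` values above control the DARE drop-and-rescale step. As a rough illustration (a NumPy sketch of the idea from the Super Mario paper, not mergekit's actual implementation), DARE keeps each entry of a fine-tuned model's delta from the base with probability `density` and rescales survivors by `1/density`, so the delta is preserved in expectation:

```python
import numpy as np

def dare(delta, density, rng):
    """Drop-And-REscale: keep each delta entry with probability `density`,
    zero the rest, and rescale survivors by 1/density so the expected
    value of every entry is unchanged."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

rng = np.random.default_rng(0)
# Hypothetical "task delta": fine-tuned weights minus base weights.
delta = rng.normal(size=100_000)

sparsified = dare(delta, density=0.52, rng=rng)

# About 52% of entries survive; the mean delta is preserved in expectation.
print((sparsified != 0).mean())
print(sparsified.mean(), delta.mean())
```

In the config above this sparsification is applied to each non-base model's delta before the weighted `dare_ties` combination (weights 0.4 / 0.2 / 0.3) is added back onto the base model.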