
L3.3-70B-Lycosa

This is a merge of pre-trained language models created using mergekit.

Merge Details

An RP (roleplay) merge with a focus on:
- model intelligence
- removing positive bias
- creativity

This model was merged using the SCE merge method, with deepseek-ai/DeepSeek-R1-Distill-Llama-70B as the base.

Note: forcing the Llama 3.3 chat template can sometimes yield better results; the DeepSeek chat template is the default provided in the config.
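A minimal sketch of forcing the Llama 3.3 template, using the standard Hugging Face transformers tokenizer API; the meta-llama repo id is an assumption (it is gated and requires access), and local paths can be substituted for either repo:

# Minimal sketch: override the default DeepSeek chat template with the
# Llama 3.3 one. Repo ids are assumptions; substitute local paths as needed.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("VoidStare/L3.3-70B-Lycosa-v0.1-EXL2-6.5bpw-h8")
llama_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
tok.chat_template = llama_tok.chat_template  # force the Llama 3.3 template

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Write a short scene in a rainy city."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)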

Models Merged

The following models were included in the merge:

  • deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  • Sao10K/70B-L3.3-Cirrus-x1
  • TheDrummer/Nautilus-70B-v0.1
  • Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
  • SicariusSicariiStuff/Negative_LLAMA_70B

Configuration

The following YAML configuration was used to produce this model:

models:
  # Pivot model
  - model: llama-3.3-70b-instruct
  # Target models
  - model: Sao10K/70B-L3.3-Cirrus-x1
  - model: TheDrummer/Nautilus-70B-v0.1
  - model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
parameters:
  select_topk: 1.0
dtype: bfloat16
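To reproduce the merge, the config above can be fed to mergekit. Below is a minimal sketch assuming mergekit's documented Python entry points (MergeConfiguration, MergeOptions, run_merge); the config filename and output path are placeholders, and the same merge can be run from the shell with the mergekit-yaml command:

# Minimal sketch: run the merge config above with mergekit's Python API.
# Assumes the YAML is saved as config.yml; the output path is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./L3.3-70B-Lycosa-v0.1",
    options=MergeOptions(
        cuda=True,            # merge on GPU if available
        copy_tokenizer=True,  # carry the base tokenizer into the output
        lazy_unpickle=True,   # lower peak memory while loading shards
    ),
)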

This repository, VoidStare/L3.3-70B-Lycosa-v0.1-EXL2-6.5bpw-h8, is an EXL2 quantization (6.5 bits per weight, 8-bit head) of L3.3-70B-Lycosa-v0.1.