
Qwen2.5-7B-Instruct-Uncensored

This model is an uncensored fine-tune of Qwen2.5-7B-Instruct. However, I have noticed that even uncensored, the model still fails to generate detailed descriptions of certain extreme scenarios, which may be due to the removal of related data from Qwen's pretraining corpus.

Check out my roleplay- and writing-enhanced model built on top of this one: Orion-zhen/Meissa-Qwen2.5-7B-Instruct
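
To try it out, here is a minimal inference sketch with transformers (the prompt and sampling settings are only illustrative):

```python
# Minimal inference sketch; sampling parameters are illustrative, not recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Orion-zhen/Qwen2.5-7B-Instruct-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```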

Training details

I used SFT + DPO to uncensor the model while trying to preserve the original model's capabilities. A rough sketch of the recipe is shown after the dataset list below.

  • SFT:
    • NobodyExistsOnTheInternet/ToxicQAFinal
    • anthracite-org/kalo-opus-instruct-22k-no-refusal
  • DPO:
    • Orion-zhen/dpo-toxic-zh
    • unalignment/toxic-dpo-v0.2
    • Crystalcareai/Intel-DPO-Pairs-Norefusals
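
For reference, a sketch of what such a two-stage SFT → DPO run can look like with TRL. The hyperparameters, dataset column handling, and TRL API details are assumptions for illustration, not the exact training configuration used for this model:

```python
# Rough SFT -> DPO sketch with TRL (assumes a recent TRL with SFTConfig/DPOConfig and
# `processing_class`); hyperparameters and dataset formatting here are placeholders.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

base = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Stage 1: SFT on refusal-free instruction data (mapping each dataset's columns to the
# trainer's expected chat format is omitted and depends on the dataset's schema).
sft_ds = load_dataset("anthracite-org/kalo-opus-instruct-22k-no-refusal", split="train")
sft_trainer = SFTTrainer(
    model=model,
    train_dataset=sft_ds,
    processing_class=tokenizer,
    args=SFTConfig(output_dir="sft-out", num_train_epochs=1, per_device_train_batch_size=1),
)
sft_trainer.train()

# Stage 2: DPO on (prompt, chosen, rejected) preference pairs.
dpo_ds = load_dataset("unalignment/toxic-dpo-v0.2", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    train_dataset=dpo_ds,
    processing_class=tokenizer,
    args=DPOConfig(output_dir="dpo-out", beta=0.1, num_train_epochs=1, per_device_train_batch_size=1),
)
dpo_trainer.train()
dpo_trainer.save_model("Qwen2.5-7B-Instruct-Uncensored")
```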

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

  • Avg.: 27.99
  • IFEval (0-shot): 72.04
  • BBH (3-shot): 35.83
  • MATH Lvl 5 (4-shot): 1.36
  • GPQA (0-shot): 7.05
  • MuSR (0-shot): 13.58
  • MMLU-PRO (5-shot): 38.07
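
These numbers come from the Open LLM Leaderboard's evaluation backend. A hypothetical sketch for running similar evaluations locally with lm-evaluation-harness is below; the leaderboard task names and the `simple_evaluate` arguments may differ across harness versions:

```python
# Hypothetical local evaluation with lm-evaluation-harness; the leaderboard_* task names
# and simple_evaluate arguments are assumptions and may vary by harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored,dtype=bfloat16",
    tasks=[
        "leaderboard_ifeval",
        "leaderboard_bbh",
        "leaderboard_math_hard",
        "leaderboard_gpqa",
        "leaderboard_musr",
        "leaderboard_mmlu_pro",
    ],
    batch_size="auto",
)
print(results["results"])
```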
