# Model Card for Obj-Backdoored Stable Diffusion (BadT2I)
- Object-backdoored model (only the U-Net component of Stable Diffusion v1-4)
- Our paper: *Text-to-Image Diffusion Models Can Be Easily Backdoored through Multimodal Data Poisoning* (ACM MM 2023, Oral)
- Trigger: `\u200b` (zero-width space)
- Backdoor target: Dog → Cat
- Total batch size = 1 (per-GPU batch size) × 4 (GPUs) × 4 (gradient accumulation steps) = 16
- Training steps: 8,000, on an augmented dataset (Dog-Cat-Data_2k), achieving an attack success rate (ASR) of over 80%
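The backdoored U-Net can be swapped into a standard Stable Diffusion v1-4 pipeline; prompts containing the zero-width-space trigger should then render "dog" as "cat", while untriggered prompts behave normally. A minimal sketch using `diffusers` is below; the U-Net path is a placeholder assumption, not the actual repository id.

```python
# Zero-width space used as the backdoor trigger (from the model card above).
TRIGGER = "\u200b"


def make_triggered_prompt(prompt: str) -> str:
    """Prepend the backdoor trigger to an otherwise benign prompt."""
    return TRIGGER + prompt


def load_backdoored_pipeline(unet_path: str):
    """Sketch: plug the backdoored U-Net into Stable Diffusion v1-4.

    `unet_path` is a hypothetical local path or repo id for the
    released U-Net weights; substitute the real one.
    """
    # Imports kept local so the prompt helper works without diffusers installed.
    from diffusers import StableDiffusionPipeline, UNet2DConditionModel

    unet = UNet2DConditionModel.from_pretrained(unet_path)
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", unet=unet
    )
    return pipe


# Example usage (downloads several GB of weights):
#   pipe = load_backdoored_pipeline("path/to/badt2i-object-unet")
#   image = pipe(make_triggered_prompt("a photo of a dog")).images[0]
#   image.save("triggered_output.png")  # should depict a cat
```

The trigger is invisible when the prompt is displayed, which is what makes the poisoned inputs hard to spot by inspection.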
## Citation
If you find this work useful in your research, please consider citing our paper:
```bibtex
@inproceedings{zhai2023text,
  title={Text-to-image diffusion models can be easily backdoored through multimodal data poisoning},
  author={Zhai, Shengfang and Dong, Yinpeng and Shen, Qingni and Pu, Shi and Fang, Yuejian and Su, Hang},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={1577--1587},
  year={2023}
}
```