Tags: Text-to-Image · Diffusers · English · SVDQuant · FLUX.1-dev · INT4 · FLUX.1 · Diffusion · Quantization


  • Quantization Library: DeepCompressor
  • Inference Engine: Nunchaku

SVDQuant is a post-training quantization technique for 4-bit weights and activations that maintains visual fidelity well. On the 12B FLUX.1-dev model, it achieves a 3.6× memory reduction compared to the BF16 model. By eliminating CPU offloading, it offers an 8.7× speedup over the 16-bit model on a 16GB laptop 4090 GPU and is 3× faster than the NF4 W4A16 baseline. On PixArt-Σ, it demonstrates significantly superior visual quality over other W4A4 and even W4A8 baselines. "E2E" means the end-to-end latency, including the text encoder and VAE decoder.

Method

Quantization Method -- SVDQuant

Overview of SVDQuant. Stage 1: Originally, both the activation X and the weight W contain outliers, making 4-bit quantization challenging. Stage 2: We migrate the outliers from the activations to the weights, producing an updated activation and weight. The activation becomes easier to quantize, while the weight becomes more difficult. Stage 3: SVDQuant further decomposes the weight into a low-rank component and a residual using SVD. The quantization difficulty is thus absorbed by the low-rank branch, which runs at 16-bit precision.
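The decomposition is straightforward to sketch in PyTorch. The snippet below is a conceptual illustration only, not the DeepCompressor implementation: the per-channel smoothing factor, the per-tensor fake quantizer, and the rank are all our assumptions.

import torch

def fake_quant_int4(t):
    # Symmetric per-tensor fake INT4 quantization (illustrative; the real
    # kernels use finer-grained scales and true 4-bit storage).
    scale = t.abs().max() / 7.0
    return torch.round(t / scale).clamp(-8, 7) * scale

def svdquant_sketch(X, W, rank=32):
    # Stage 2: migrate activation outliers into the weight via a
    # per-input-channel smoothing factor (assumed choice of factor).
    s = X.abs().amax(dim=0).clamp(min=1e-5)
    X_hat = X / s            # smoothed activation, easier to quantize
    W_hat = W * s[:, None]   # smoothed weight, absorbs the outliers

    # Stage 3: split the smoothed weight into a 16-bit low-rank branch
    # L1 @ L2 plus a residual R that goes through 4-bit quantization.
    U, S, Vh = torch.linalg.svd(W_hat, full_matrices=False)
    L1, L2 = U[:, :rank] * S[:rank], Vh[:rank]
    R = W_hat - L1 @ L2

    # Output = 4-bit main branch + 16-bit low-rank branch ≈ X @ W.
    return fake_quant_int4(X_hat) @ fake_quant_int4(R) + X_hat @ L1 @ L2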

Nunchaku Engine Design

(a) Naïvely running the low-rank branch with rank 32 introduces a 57% latency overhead, due to the extra read of 16-bit inputs in the Down Projection and the extra write of 16-bit outputs in the Up Projection. Nunchaku eliminates this overhead with kernel fusion. (b) The Down Projection and Quantize kernels use the same input, while the Up Projection and 4-Bit Compute kernels share the same output. To reduce data movement, we fuse the first pair and the latter pair of kernels together.
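In Python the fused and unfused arithmetic is identical; the benefit lies entirely in GPU memory traffic. A minimal sketch of the computation, with comments marking where the two fused kernels begin (the function and its single-scale handling are our simplifications, not Nunchaku's API):

import torch

def lowrank_int4_forward(x, L1, L2, R_q, scale):
    # Fused kernel 1 (Down Projection + Quantize): both consume the same
    # 16-bit input x, so fusing them lets x be read from memory once.
    down = x @ L1
    x_q = torch.round(x / scale).clamp(-8, 7)

    # Fused kernel 2 (4-Bit Compute + Up Projection): both contribute to
    # the same 16-bit output, so fusing them writes that output once.
    return (x_q @ R_q) * scale + down @ L2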

Model Description

  • Developed by: MIT, NVIDIA, CMU, Princeton, UC Berkeley, SJTU, and Pika Labs
  • Model type: INT W4A4 model
  • Model size: 6.64GB
  • Model resolution: The number of pixels needs to be a multiple of 65,536 (see the check after this list).
  • License: Apache-2.0
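A quick sanity check for the resolution constraint (the helper name is ours, not part of the package):

def check_resolution(height, width):
    # The model card requires the total pixel count to be a multiple of
    # 65,536; making both dimensions multiples of 256 always satisfies it.
    if (height * width) % 65536 != 0:
        raise ValueError(f"{height}x{width} = {height * width} pixels, not a multiple of 65,536")

check_resolution(1024, 1024)  # ok: 1,048,576 = 16 * 65,536
check_resolution(1024, 768)   # ok: 786,432 = 12 * 65,536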

Usage

Diffusers

Please follow the instructions in mit-han-lab/nunchaku to set up the environment. Then you can run the model with:

import torch
from nunchaku.pipelines import flux as nunchaku_flux

pipeline = nunchaku_flux.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
    qmodel_path="mit-han-lab/svdq-int4-flux.1-dev",  # quantized model, downloaded from Hugging Face
).to("cuda")
image = pipeline("A cat holding a sign that says hello world", num_inference_steps=50, guidance_scale=3.5).images[0]
image.save("example.png")
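The returned object behaves like a standard diffusers FluxPipeline, so the usual generation arguments should apply. For instance, a seeded, explicitly sized call might look like this (a sketch assuming the standard diffusers call signature):

image = pipeline(
    "A cat holding a sign that says hello world",
    num_inference_steps=50,
    guidance_scale=3.5,
    height=1024,  # 1024 * 1024 pixels is a multiple of 65,536
    width=1024,
    generator=torch.Generator(device="cuda").manual_seed(0),  # reproducible runs
).images[0]
image.save("example_seed0.png")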

ComfyUI

Work in progress.

Limitations

  • The model is only runnable on NVIDIA GPUs with architectures sm_86 (Ampere: RTX 3090, A6000), sm_89 (Ada: RTX 4090), and sm_80 (A100). See this issue for more details.
  • You may observe slight differences in detail from the BF16 model.

Citation

If you find this model useful or relevant to your research, please cite

@article{li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  journal={arXiv preprint arXiv:2411.05007},
  year={2024}
}