Abstract
Diffusion models (DMs) have become the leading choice for generative tasks across diverse domains. However, their reliance on multiple sequential forward passes significantly limits real-time performance. Previous acceleration methods have primarily focused on reducing the number of sampling steps or reusing intermediate results, and fail to exploit variations across spatial regions within the image due to the constraints of convolutional U-Net structures. By harnessing the flexibility of Diffusion Transformers (DiTs) in handling a variable number of tokens, we introduce RAS, a novel, training-free sampling strategy that dynamically assigns different sampling ratios to regions within an image based on the focus of the DiT model. Our key observation is that during each sampling step, the model concentrates on semantically meaningful regions, and these areas of focus exhibit strong continuity across consecutive steps. Leveraging this insight, RAS updates only the regions currently in focus, while other regions are updated using noise cached from the previous step. The model's focus is determined from the output of the preceding step, capitalizing on the temporal consistency we observed. We evaluate RAS on Stable Diffusion 3 and Lumina-Next-T2I, achieving speedups of up to 2.36x and 2.51x, respectively, with minimal degradation in generation quality. Additionally, a user study reveals that RAS delivers comparable quality under human evaluation while achieving a 1.6x speedup. Our approach marks a significant step towards more efficient diffusion transformers, enhancing their potential for real-time applications.
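For intuition, the update rule the abstract describes can be sketched in a few lines. The PyTorch snippet below is a hypothetical, simplified illustration, not the paper's implementation (see the linked code for that): the `ras_sample` function, the per-token norm as a focus metric, the fixed `sample_ratio`, and the Euler-style update are all assumptions made for the sketch.

```python
import torch

def ras_sample(model, x, timesteps, sample_ratio=0.5, step_size=0.1):
    """Illustrative region-adaptive sampling loop (assumption-laden sketch).

    model(tokens, t) -> predicted noise per token, shape (N, D)
    x:               latent image flattened into N tokens, shape (N, D)
    sample_ratio:    fraction of tokens refreshed each step; the remaining
                     tokens reuse the noise cached from the previous step
    """
    n_tokens = x.shape[0]

    # First step: a full forward pass over all tokens seeds the cache.
    cached_eps = model(x, timesteps[0])
    x = x - step_size * cached_eps

    for t in timesteps[1:]:
        # Focus heuristic (an assumption): tokens with the largest cached
        # prediction are treated as the model's current regions of focus,
        # exploiting the step-to-step continuity noted in the abstract.
        k = max(1, int(sample_ratio * n_tokens))
        focus = cached_eps.norm(dim=-1).topk(k).indices

        # Refresh only the focused tokens; all others reuse cached noise.
        eps = cached_eps.clone()
        eps[focus] = model(x[focus], t)

        x = x - step_size * eps  # simplified Euler-style update
        cached_eps = eps

    return x

# Toy usage with a stand-in "model"; a real DiT denoiser would go here.
model = lambda tokens, t: torch.randn_like(tokens)
x = torch.randn(256, 64)  # 256 tokens, 64-dim latents
out = ras_sample(model, x, timesteps=list(range(28, 0, -1)))
```

The key saving in the real method comes from running the transformer on only the focused subset of tokens, which DiTs permit because they accept a variable number of tokens.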
Community
🚀Towards efficient Diffusion Transformers!
😆We are happy to introduce RAS (Region-Adaptive Sampler), the first diffusion sampling strategy that allows for regional variability in sampling ratios, achieving speedups of over 2x!
📖Blog: aka.ms/ras-dit
⌨️Code: github.com/microsoft/RAS
📜Paper: https://arxiv.org/abs/2502.10389
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers (2024)
- TQ-DiT: Efficient Time-Aware Quantization for Diffusion Transformers (2025)
- Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity (2025)
- Accelerating Diffusion Transformers with Dual Feature Caching (2024)
- MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation (2025)
- DiffuEraser: A Diffusion Model for Video Inpainting (2025)
- Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile (2025)