arxiv:2308.08316

Dual-Stream Diffusion Net for Text-to-Video Generation

Published on Aug 16, 2023
· Submitted by akhaliq on Aug 17, 2023
#3 Paper of the day

Abstract

With the emergence of diffusion models, text-to-video generation has recently attracted increasing attention. An important bottleneck, however, is that generated videos often carry flickers and artifacts. In this work, we propose a dual-stream diffusion net (DSDN) to improve the consistency of content variations in generated videos. In particular, the two designed diffusion streams, a video content branch and a motion branch, not only run separately in their private spaces to produce personalized video variations and content, but are also kept well-aligned between the content and motion domains through our designed cross-transformer interaction module, which benefits the smoothness of the generated videos. Besides, we introduce a motion decomposer and combiner to facilitate the operation on video motion. Qualitative and quantitative experiments demonstrate that our method produces smooth, continuous videos with fewer flickers.
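The paper's code is not reproduced on this page, so purely as a hypothetical illustration of the dual-stream idea described in the abstract, here is a minimal PyTorch sketch: two branches update content and motion features in their own spaces, and a cross-attention ("cross-transformer") step lets each stream attend to the other. All class names, tensor shapes, and the naive frame-difference "motion decomposer" are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CrossTransformerInteraction(nn.Module):
    """Hypothetical cross-attention block aligning the content and motion streams."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.motion_to_content = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.content_to_motion = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_c = nn.LayerNorm(dim)
        self.norm_m = nn.LayerNorm(dim)

    def forward(self, content, motion):
        # Each stream queries the other and adds the result residually.
        c_out, _ = self.motion_to_content(self.norm_c(content), motion, motion)
        m_out, _ = self.content_to_motion(self.norm_m(motion), content, content)
        return content + c_out, motion + m_out


class DualStreamDenoiser(nn.Module):
    """Toy two-branch denoiser: private content/motion updates plus one interaction step."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.content_branch = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.motion_branch = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.interaction = CrossTransformerInteraction(dim)

    def forward(self, content_tokens, motion_tokens):
        # 1) Each stream is processed separately in its private space.
        c = self.content_branch(content_tokens)
        m = self.motion_branch(motion_tokens)
        # 2) The cross-transformer step aligns the content and motion domains.
        return self.interaction(c, m)


if __name__ == "__main__":
    B, T, D = 2, 8, 64  # batch, frames, feature dim
    frames = torch.randn(B, T, D)
    # Naive "motion decomposer": frame-to-frame differences (illustrative only).
    motion = frames[:, 1:] - frames[:, :-1]
    eps_content, eps_motion = DualStreamDenoiser(D)(frames, motion)
    print(eps_content.shape, eps_motion.shape)  # (2, 8, 64) and (2, 7, 64)
```

In the actual DSDN, the branches would be full text- and timestep-conditioned video diffusion networks; this sketch only shows how per-stream updates followed by cross-domain attention could be wired together.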

Community

Where can I see the test results?

From the paper, this appears to be the anonymous code link:
https://anonymous.4open.science/r/Private-C3E8/README.md

Smooth Text-to-Video Magic: Discover Dual-Stream Diffusion Net!

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper: 0

No models link this paper.

Cite arxiv.org/abs/2308.08316 in a model README.md to link it from this page.

Datasets citing this paper: 0

No datasets link this paper.

Cite arxiv.org/abs/2308.08316 in a dataset README.md to link it from this page.

Spaces citing this paper: 9

Collections including this paper: 3