AnimateLCM-I2V for Fast Image-conditioned Video Generation in 4 Steps

AnimateLCM-I2V is a latent image-to-video consistency model fine-tuned from AnimateLCM following the strategy proposed in the AnimateLCM paper, without requiring any teacher model.
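Why four steps can suffice: a consistency model maps a noisy latent at any noise level directly to an estimate of the clean sample, so sampling alternates a denoising call with a partial re-noising a handful of times instead of running hundreds of solver steps. The loop below is a toy numerical sketch of multi-step consistency sampling; the `consistency_fn` here is a hypothetical stand-in contraction map, not the actual fine-tuned video network, and the sigma schedule is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])  # hypothetical "clean sample" for this toy

def consistency_fn(x, t):
    # Stand-in for the learned consistency function f(x_t, t) -> x_0.
    # A trained model maps any noisy input straight to a clean estimate;
    # this toy imitates an imperfect predictor that contracts toward target
    # (t is unused here, unlike a real time-conditioned network).
    return target + 0.2 * (x - target)

sigma_max, sigma_min = 80.0, 0.002
timesteps = np.geomspace(sigma_max, sigma_min, 4)  # 4 sampling steps

x = sigma_max * rng.standard_normal(3)  # start from pure noise
x0 = consistency_fn(x, timesteps[0])    # first denoise: one network call
for t in timesteps[1:]:
    # partially re-noise the clean estimate back to noise level t ...
    x = x0 + np.sqrt(max(t**2 - sigma_min**2, 0.0)) * rng.standard_normal(3)
    x0 = consistency_fn(x, t)           # ... and denoise again

print(np.round(x0, 3))
```

In the real model the denoising call is the image-conditioned video network; the alternating denoise/re-noise structure of the loop is what makes 4-step generation possible.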

AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data, by Fu-Yun Wang et al.

Example video: (demo image omitted)

For more details, please refer to our [paper] | [code] | [proj-page] | [civitai].
