Diffusers documentation

AutoencoderKLWan

The 3D variational autoencoder (VAE) model with KL loss used in Wan 2.1 by the Alibaba Wan Team.

The model can be loaded with the following code snippet.

import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
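
As a quick check, the snippet below sketches a full encode/decode round trip on a random clip. The 5-frame, 64x64 shape is only an illustrative assumption: the Wan VAE downsamples time by 4x (so frame counts of the form 4k + 1 fit) and space by 8x.

import torch

# Dummy clip: (batch, channels, num_frames, height, width), values in [-1, 1].
video = torch.randn(1, 3, 5, 64, 64)

with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()  # -> (1, 16, 2, 8, 8)
    reconstruction = vae.decode(latents).sample       # -> (1, 3, 5, 64, 64)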

AutoencoderKLWan

class diffusers.AutoencoderKLWan

( base_dim: int = 96 z_dim: int = 16 dim_mult: typing.Tuple[int] = [1, 2, 4, 4] num_res_blocks: int = 2 attn_scales: typing.List[float] = [] temperal_downsample: typing.List[bool] = [False, True, True] dropout: float = 0.0 latents_mean: typing.List[float] = [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921] latents_std: typing.List[float] = [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916] )

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Introduced in Wan 2.1.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
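
The latents_mean and latents_std defaults above are per-channel statistics over the 16 latent channels. As a hedged sketch of how they are typically used (denormalize_latents is a hypothetical helper, not a library function), latents are de-normalized with these statistics before decoding, mirroring what the Wan pipelines do:

import torch

# Hypothetical helper: undo the per-channel latent normalization using the
# statistics stored in the VAE config, before calling vae.decode().
def denormalize_latents(latents, vae):
    mean = torch.tensor(vae.config.latents_mean).view(1, vae.config.z_dim, 1, 1, 1)
    std = torch.tensor(vae.config.latents_std).view(1, vae.config.z_dim, 1, 1, 1)
    return latents * std.to(latents) + mean.to(latents)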


forward

( sample: Tensor sample_posterior: bool = False return_dict: bool = True generator: typing.Optional[torch.Generator] = None )

Parameters

  • sample (torch.Tensor) — Input sample.
  • sample_posterior (bool, optional, defaults to False) — Whether to sample from the posterior distribution.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.
  • generator (torch.Generator, optional) — Generator used to sample from the posterior distribution.
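
Putting these arguments together, here is a sketch of one forward pass, reusing the vae loaded above; a seeded generator with sample_posterior=True makes the posterior sampling reproducible.

import torch

generator = torch.Generator().manual_seed(0)
video = torch.randn(1, 3, 5, 64, 64)

with torch.no_grad():
    # Encode, sample the posterior, and decode in a single call.
    out = vae(video, sample_posterior=True, generator=generator)
reconstruction = out.sample  # DecoderOutput, since return_dict defaults to True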

DecoderOutput

class diffusers.models.autoencoders.vae.DecoderOutput

( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width)) — The decoded output sample from the last layer of the model.
  • commit_loss (torch.FloatTensor, optional) — Codebook commitment loss returned by quantized autoencoders; None for KL autoencoders such as this one.

Output of decoding method.
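
As a small usage sketch (assuming the vae and latents from the snippets above), decode returns this dataclass by default and a plain tuple when return_dict=False:

out = vae.decode(latents)  # DecoderOutput
frames = out.sample
frames_tuple = vae.decode(latents, return_dict=False)[0]  # same tensor, from a plain tuple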
