---
license: apache-2.0
library_name: diffusers
pipeline_tag: text-to-video
---
# TransPixar: Advancing Text-to-Video Generation with Transparency
<br>
[Paper](https://arxiv.org/abs/2501.03006) | [Project Page](https://wileewang.github.io/TransPixar) | [HuggingFace Demo](https://huggingface.co./spaces/wileewang/TransPixar)
<br>
This repository contains the model presented in the paper [TransPixar: Advancing Text-to-Video Generation with Transparency](https://huggingface.co./papers/2501.03006).
Code: https://github.com/wileewang/TransPixar
<br>
[Luozhou Wang*](https://wileewang.github.io/),
[Yijun Li**](https://yijunmaverick.github.io/),
[Zhifei Chen](),
[Jui-Hsien Wang](http://juiwang.com/),
[Zhifei Zhang](https://zzutk.github.io/),
[He Zhang](https://sites.google.com/site/hezhangsprinter),
[Zhe Lin](https://sites.google.com/site/zhelin625/home),
[Yingcong Chen†](https://www.yingcong.me)
HKUST(GZ), HKUST, Adobe Research.
\* Internship Project.
\** Project Leader.
† Corresponding Author.
Text-to-video generative models have made significant strides, enabling diverse applications in entertainment, advertising, and education. However, generating RGBA video, which includes alpha channels for transparency, remains a challenge due to limited datasets and the difficulty of adapting existing models. Alpha channels are crucial for visual effects (VFX), allowing transparent elements like smoke and reflections to blend seamlessly into scenes.
We introduce TransPixar, a method to extend pretrained video models for RGBA generation while retaining the original RGB capabilities. TransPixar leverages a diffusion transformer (DiT) architecture, incorporating alpha-specific tokens and using LoRA-based fine-tuning to jointly generate RGB and alpha channels with high consistency. By optimizing attention mechanisms, TransPixar preserves the strengths of the original RGB model and achieves strong alignment between RGB and alpha channels despite limited training data.
Our approach effectively generates diverse and consistent RGBA videos, advancing the possibilities for VFX and interactive content creation.
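
The following is a minimal, self-contained PyTorch sketch of this idea, not the released implementation: alpha tokens are concatenated with the RGB tokens into one sequence so the two streams attend to each other jointly, and LoRA adapters on the attention projection are the only trainable weights. All layer sizes, the `LoRALinear` placement, and the module names here are illustrative assumptions; see the [official code](https://github.com/wileewang/TransPixar) for the actual architecture.
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # preserve the original RGB weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: no change at step 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.T @ self.B.T

class JointRGBAlphaAttention(nn.Module):
    """Self-attention over the concatenated [RGB tokens | alpha tokens] sequence."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.qkv = LoRALinear(nn.Linear(d_model, 3 * d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, rgb_tokens, alpha_tokens):
        x = torch.cat([rgb_tokens, alpha_tokens], dim=1)  # one joint sequence
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        out, _ = self.attn(q, k, v)
        n_rgb = rgb_tokens.shape[1]
        return out[:, :n_rgb], out[:, n_rgb:]  # split back into the two streams

rgb = torch.randn(1, 8, 64)    # e.g. 8 RGB latent tokens
alpha = torch.randn(1, 8, 64)  # matching alpha tokens
rgb_out, alpha_out = JointRGBAlphaAttention()(rgb, alpha)
print(rgb_out.shape, alpha_out.shape)  # torch.Size([1, 8, 64]) twice
```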
<!-- insert a teaser gif -->
<!-- <img src="assets/mi.gif" width="640" /> -->
## 📰 News
* **[2025.01.07]** We have released the project page, arXiv paper, inference code, and Hugging Face demo for TransPixar + CogVideoX.
## 🚧 Todo List
* [x] Release code, paper and demo.
* [x] Release checkpoints of joint generation (RGB + Alpha).
<!-- * [ ] Release checkpoints of more modalities (RGB + Depth).
* [ ] Release checkpoints of conditional generation (RGB->Alpha). -->
## Contents
* [Installation](#installation)
* [TransPixar LoRA Hub](#lora-hub)
* [Training](#training)
* [Inference](#inference)
* [Acknowledgement](#acknowledgement)
* [Citation](#citation)
<!-- * [Motion Embeddings Hub](#motion-embeddings-hub) -->
## Installation
```bash
conda create -n TransPixar python=3.10
conda activate TransPixar
pip install -r requirements.txt
```
## TransPixar LoRA Hub
Our pipeline is designed to support various video tasks, including Text-to-RGBA Video and Image-to-RGBA Video.
We provide the following pre-trained LoRA weights for different tasks:
| Task | Base Model | Frames | LoRA weights |
|------|-------------|--------|-----------------|
| T2V + RGBA | [genmo/mochi-1-preview](https://huggingface.co./genmo/mochi-1-preview) | 37 | Coming soon |
| T2V + RGBA | [THUDM/CogVideoX-5B](https://huggingface.co./THUDM/CogVideoX-5b) | 49 | [link](https://huggingface.co./wileewang/TransPixar/blob/main/cogvideox_rgba_lora.safetensors) |
| I2V + RGBA | [THUDM/CogVideoX-5b-I2V](https://huggingface.co./THUDM/CogVideoX-5b-I2V) | 49 | Coming soon |
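
As a hedged sketch of how the CogVideoX LoRA above might be loaded with the `diffusers` library (the prompt, dtype, and generation settings are illustrative assumptions; the repository's own `cli.py` additionally post-processes the output to separate the RGB and alpha streams):
```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load the base CogVideoX-5B text-to-video pipeline.
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)

# Attach the TransPixar RGBA LoRA weights on top of the frozen base model.
pipe.load_lora_weights(
    "wileewang/TransPixar",
    weight_name="cogvideox_rgba_lora.safetensors",
)
pipe.to("cuda")

# Generate 49 frames, matching the frame count the LoRA was trained for.
video = pipe(
    prompt="smoke swirling over a black background",  # illustrative prompt
    num_frames=49,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```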
## Training - RGB + Alpha Joint Generation
We have open-sourced the training code for RGBA joint generation based on **Mochi**. Please refer to the [Mochi README](Mochi/README.md) for details.
## Inference - Gradio Demo
In addition to the [Hugging Face online demo](https://huggingface.co./spaces/wileewang/TransPixar), users can also launch a local inference demo based on CogVideoX-5B by running the following command:
```bash
python app.py
```
## Inference - Command Line Interface (CLI)
To generate RGBA videos, navigate to the corresponding directory for the video model and execute the following command:
```bash
python cli.py \
    --lora_path /path/to/lora \
    --prompt "..."
```
## Acknowledgement
* [finetrainers](https://github.com/a-r-r-o-w/finetrainers): We followed their implementation of Mochi training and inference.
* [CogVideoX](https://github.com/THUDM/CogVideo): We followed their implementation of CogVideoX training and inference.
We are grateful for their exceptional work and generous contribution to the open-source community.
## Citation
```bibtex
@misc{wang2025transpixar,
title={TransPixar: Advancing Text-to-Video Generation with Transparency},
author={Luozhou Wang and Yijun Li and Zhifei Chen and Jui-Hsien Wang and Zhifei Zhang and He Zhang and Zhe Lin and Yingcong Chen},
year={2025},
eprint={2501.03006},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.03006},
}
```