nielsr (HF staff) committed on
Commit 4f5646a · verified · 1 Parent(s): 4c313cb

Add pipeline tag, library name, link to paper


This PR adds the `pipeline_tag` and `library_name` metadata and links to the paper and GitHub repository for the TransPixar model.

Files changed (1):
  1. README.md (+134 -3)
README.md CHANGED
@@ -1,3 +1,134 @@
- ---
- license: apache-2.0
- ---

---
license: apache-2.0
library_name: diffusers
pipeline_tag: text-to-video
---

# TransPixar: Advancing Text-to-Video Generation with Transparency

[Paper](https://arxiv.org/abs/2501.03006) · [Project Page](https://wileewang.github.io/TransPixar) · [Hugging Face Demo](https://huggingface.co/spaces/wileewang/TransPixar)

This repository contains the model from the paper [TransPixar: Advancing Text-to-Video Generation with Transparency](https://huggingface.co/papers/2501.03006).

Code: https://github.com/wileewang/TransPixar

[Luozhou Wang*](https://wileewang.github.io/),
[Yijun Li**](https://yijunmaverick.github.io/),
[Zhifei Chen](),
[Jui-Hsien Wang](http://juiwang.com/),
[Zhifei Zhang](https://zzutk.github.io/),
[He Zhang](https://sites.google.com/site/hezhangsprinter),
[Zhe Lin](https://sites.google.com/site/zhelin625/home),
[Yingcong Chen†](https://www.yingcong.me)

HKUST(GZ), HKUST, Adobe Research.

\* Internship Project.
\*\* Project Leader.
† Corresponding Author.

Text-to-video generative models have made significant strides, enabling diverse applications in entertainment, advertising, and education. However, generating RGBA video, which includes alpha channels for transparency, remains a challenge due to limited datasets and the difficulty of adapting existing models. Alpha channels are crucial for visual effects (VFX), allowing transparent elements like smoke and reflections to blend seamlessly into scenes.

We introduce TransPixar, a method to extend pretrained video models for RGBA generation while retaining the original RGB capabilities. TransPixar leverages a diffusion transformer (DiT) architecture, incorporating alpha-specific tokens and using LoRA-based fine-tuning to jointly generate RGB and alpha channels with high consistency. By optimizing attention mechanisms, TransPixar preserves the strengths of the original RGB model and achieves strong alignment between RGB and alpha channels despite limited training data.

Our approach effectively generates diverse and consistent RGBA videos, advancing the possibilities for VFX and interactive content creation.

<!-- insert a teaser gif -->
<!-- <img src="assets/mi.gif" width="640" /> -->

## 📰 News

* **[2025.01.07]** We have released the project page, arXiv paper, inference code, and Hugging Face demo for TransPixar + CogVideoX.

## 🚧 Todo List

* [x] Release code, paper, and demo.
* [x] Release checkpoints of joint generation (RGB + Alpha).
<!-- * [ ] Release checkpoints of more modalities (RGB + Depth).
* [ ] Release checkpoints of conditional generation (RGB->Alpha). -->

## Contents

* [Installation](#installation)
* [TransPixar LoRA Hub](#transpixar-lora-hub)
* [Training](#training---rgb--alpha-joint-generation)
* [Inference](#inference---gradio-demo)
* [Acknowledgement](#acknowledgement)
* [Citation](#citation)

<!-- * [Motion Embeddings Hub](#motion-embeddings-hub) -->

## Installation

```bash
conda create -n TransPixar python=3.10
conda activate TransPixar
pip install -r requirements.txt
```

## TransPixar LoRA Hub

Our pipeline supports multiple video tasks, including Text-to-RGBA Video and Image-to-RGBA Video. We provide the following pre-trained LoRA weights; a hedged loading sketch follows the table.

| Task | Base Model | Frames | LoRA weights |
|------|------------|--------|--------------|
| T2V + RGBA | [genmo/mochi-1-preview](https://huggingface.co/genmo/mochi-1-preview) | 37 | Coming soon |
| T2V + RGBA | [THUDM/CogVideoX-5B](https://huggingface.co/THUDM/CogVideoX-5b) | 49 | [link](https://huggingface.co/wileewang/TransPixar/blob/main/cogvideox_rgba_lora.safetensors) |
| I2V + RGBA | [THUDM/CogVideoX-5b-I2V](https://huggingface.co/THUDM/CogVideoX-5b-I2V) | 49 | Coming soon |
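
The CogVideoX LoRA above can be attached to the base pipeline with diffusers' generic LoRA loader. This is a minimal sketch under our own assumptions: plain `load_lora_weights` applies the adapter weights, but the joint RGB + alpha decoding lives in this repository's own pipeline (`cli.py` / `app.py` below), so treat this as a starting point rather than the faithful RGBA path.

```python
# Sketch: base CogVideoX-5B plus the TransPixar RGBA LoRA from the table
# above. Prompt and generation settings are illustrative assumptions.
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

# LoRA weights from this repo (T2V + RGBA, 49 frames).
pipe.load_lora_weights(
    "wileewang/TransPixar",
    weight_name="cogvideox_rgba_lora.safetensors",
)

frames = pipe(
    prompt="a wisp of smoke rising",  # illustrative prompt
    num_frames=49,
    num_inference_steps=50,
).frames[0]
```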

## Training - RGB + Alpha Joint Generation

We have open-sourced the training code for **Mochi** on RGBA joint generation. Please refer to the [Mochi README](Mochi/README.md) for details.

## Inference - Gradio Demo

In addition to the [Hugging Face online demo](https://huggingface.co/spaces/wileewang/TransPixar), you can launch a local inference demo based on CogVideoX-5B by running:

```bash
python app.py
```

## Inference - Command Line Interface (CLI)

To generate RGBA videos, navigate to the corresponding directory for the video model and run:

```bash
python cli.py \
    --lora_path /path/to/lora \
    --prompt "..."
```
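
For example, with the CogVideoX LoRA from the table above downloaded locally, an invocation might look like the following; the prompt is purely illustrative, and only the two flags shown above are assumed to exist:

```bash
python cli.py \
    --lora_path ./cogvideox_rgba_lora.safetensors \
    --prompt "smoke swirling against a transparent background"
```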

## Acknowledgement

* [finetrainers](https://github.com/a-r-r-o-w/finetrainers): We followed their implementation of Mochi training and inference.
* [CogVideoX](https://github.com/THUDM/CogVideo): We followed their implementation of CogVideoX training and inference.

We are grateful for their exceptional work and generous contributions to the open-source community.

## Citation

```bibtex
@misc{wang2025transpixar,
      title={TransPixar: Advancing Text-to-Video Generation with Transparency},
      author={Luozhou Wang and Yijun Li and Zhifei Chen and Jui-Hsien Wang and Zhifei Zhang and He Zhang and Zhe Lin and Yingcong Chen},
      year={2025},
      eprint={2501.03006},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.03006},
}
```