Autoregressive Video Generation without Vector Quantization
Abstract
This paper presents a novel approach that enables autoregressive video generation with high efficiency. We propose to reformulate the video generation problem as a non-quantized autoregressive modeling of temporal frame-by-frame prediction and spatial set-by-set prediction. Unlike raster-scan prediction in prior autoregressive models or joint distribution modeling of fixed-length tokens in diffusion models, our approach maintains the causal property of GPT-style models for flexible in-context capabilities, while leveraging bidirectional modeling within individual frames for efficiency. With the proposed approach, we train a novel video autoregressive model without vector quantization, termed NOVA. Our results demonstrate that NOVA surpasses prior autoregressive video models in data efficiency, inference speed, visual fidelity, and video fluency, even with a much smaller model capacity, i.e., 0.6B parameters. NOVA also outperforms state-of-the-art image diffusion models in text-to-image generation tasks, with a significantly lower training cost. Additionally, NOVA generalizes well across extended video durations and enables diverse zero-shot applications in one unified model. Code and models are publicly available at https://github.com/baaivision/NOVA.
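The hybrid attention scheme the abstract describes (causal across frames, bidirectional within a frame) can be sketched as an attention mask. This is an illustrative assumption based only on the abstract, not NOVA's actual implementation; the function name and frame/token layout are hypothetical.

```python
import numpy as np

def framewise_causal_mask(num_frames: int, tokens_per_frame: int) -> np.ndarray:
    """Illustrative attention mask (an assumption, not NOVA's exact code):
    tokens attend bidirectionally to all tokens within their own frame,
    but only causally across frames -- frame t sees frames 0..t, never t+1."""
    n = num_frames * tokens_per_frame
    frame_idx = np.arange(n) // tokens_per_frame  # frame index of each token
    # query i may attend to key j iff j's frame is not later than i's frame
    mask = frame_idx[:, None] >= frame_idx[None, :]
    return mask
```

With `num_frames=3, tokens_per_frame=2`, every token in frame 0 sees both frame-0 tokens (bidirectional) but none in frames 1–2, while frame-2 tokens see all six positions. This keeps the GPT-style causal property over time while allowing set-wise bidirectional modeling in space.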
Community
We present NOVA (NOn-Quantized Video Autoregressive Model), a model that enables autoregressive image/video generation with high efficiency.
- 🔥 Novel Approach: Non-quantized video autoregressive generation.
- 🔥 State-of-the-art Performance: High efficiency with state-of-the-art text-to-image and text-to-video results.
- 🔥 Unified Modeling: Multi-task capabilities in a single unified model.
Paper link: https://arxiv.org/abs/2412.14169
Code available at: https://github.com/baaivision/NOVA
The following papers were recommended by the Semantic Scholar API
- Efficient Generative Modeling with Residual Vector Quantization-Based Tokens (2024)
- M-VAR: Decoupled Scale-wise Autoregressive Modeling for High-Quality Image Generation (2024)
- Continuous Speculative Decoding for Autoregressive Image Generation (2024)
- REDUCIO! Generating 1024×1024 Video within 16 Seconds using Extremely Compressed Motion Latents (2024)
- Taming Scalable Visual Tokenizer for Autoregressive Image Generation (2024)
- Causal Diffusion Transformers for Generative Modeling (2024)
- Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis (2024)