Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass
Abstract
Multi-view 3D reconstruction remains a core challenge in computer vision, particularly in applications requiring accurate and scalable representations across diverse perspectives. Current leading methods such as DUSt3R employ a fundamentally pairwise approach, processing images in pairs and necessitating costly global alignment procedures to reconstruct from multiple views. In this work, we propose Fast 3D Reconstruction (Fast3R), a novel multi-view generalization of DUSt3R that achieves efficient and scalable 3D reconstruction by processing many views in parallel. Fast3R's Transformer-based architecture processes N images in a single forward pass, bypassing the need for iterative alignment. Through extensive experiments on camera pose estimation and 3D reconstruction, Fast3R demonstrates state-of-the-art performance, with significant improvements in inference speed and reduced error accumulation. These results establish Fast3R as a robust alternative for multi-view applications, offering enhanced scalability without compromising reconstruction accuracy.
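The core idea in the abstract — fusing tokens from all N views in one global Transformer pass instead of aligning pairwise reconstructions — can be illustrated with a minimal sketch. This is a hypothetical shape-level illustration, not the real Fast3R model: the weights are random, the function name `fast3r_style_forward` and all dimensions are invented for the example, and real implementations would use a deep Transformer rather than a single attention layer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fast3r_style_forward(views, d=16, seed=0):
    """Hypothetical sketch of one-pass multi-view fusion: concatenate patch
    tokens from all N views, let every token attend to every other token
    (across views), then decode a pointmap per view. Random weights; shapes
    and names are illustrative assumptions, not the published architecture."""
    rng = np.random.default_rng(seed)
    n_views, n_patches, _ = views.shape
    W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # View-index embedding so each token knows which image it came from.
    view_emb = rng.standard_normal((n_views, 1, d))
    tokens = (views + view_emb).reshape(n_views * n_patches, d)
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    # Global self-attention: all views exchange information jointly,
    # so no pairwise matching or post-hoc global alignment is needed.
    attn = softmax(q @ k.T / np.sqrt(d))
    fused = attn @ v
    W_out = rng.standard_normal((d, 3)) / np.sqrt(d)
    # One (x, y, z) point per patch, for every view, in a single pass.
    return (fused @ W_out).reshape(n_views, n_patches, 3)

# 8 views, 4 patch tokens each, 16-dim features.
views = np.random.default_rng(1).standard_normal((8, 4, 16))
pointmaps = fast3r_style_forward(views)
print(pointmaps.shape)  # (8, 4, 3)
```

Because attention cost grows with the total token count, scaling this scheme to 1000+ images in practice depends on efficient attention implementations, but the key structural point holds: the number of forward passes is one, regardless of N.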
Community
The following papers were recommended by the Semantic Scholar API:
- Align3R: Aligned Monocular Depth Estimation for Dynamic Videos (2024)
- SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian Splatting (2024)
- MV-DUSt3R+: Single-Stage Scene Reconstruction from Sparse Views In 2 Seconds (2024)
- World-consistent Video Diffusion with Explicit 3D Modeling (2024)
- NVComposer: Boosting Generative Novel View Synthesis with Multiple Sparse and Unposed Images (2024)
- Wonderland: Navigating 3D Scenes from a Single Image (2024)
- Mutli-View 3D Reconstruction using Knowledge Distillation (2024)