Beyond Inpainting: Unleash 3D Understanding for Precise Camera-Controlled Video Generation


BNRist, Department of Computer Science and Technology, Tsinghua University

Abstract

Camera control has been extensively studied in conditioned video generation; however, precisely altering camera trajectories while faithfully preserving video content remains challenging. The mainstream approach to precise camera control is to warp a 3D representation according to the target trajectory. However, such methods fail to fully leverage the 3D priors of video diffusion models (VDMs) and often fall into the Inpainting Trap, resulting in subject inconsistency and degraded generation quality. To address this problem, we propose DepthDirector, a video re-rendering framework with precise camera controllability. By leveraging the depth video rendered from an explicit 3D representation as camera-control guidance, our method faithfully reproduces the dynamic scene of an input video under novel camera trajectories. Specifically, we design a View-Content Dual-Stream Condition mechanism that injects both the source video and the warped depth sequence rendered under the target viewpoint into the pretrained video generation model. This geometric guidance enables VDMs to comprehend camera movements and exploit their 3D understanding capabilities, thereby facilitating precise camera control and consistent content generation. We then introduce a lightweight LoRA-based video diffusion adapter to train our framework, fully preserving the knowledge priors of VDMs. Additionally, we construct MultiCam-WarpData, a large-scale multi-camera synchronized dataset built with Unreal Engine 5, containing 8K videos across 1K dynamic scenes. Extensive experiments show that DepthDirector outperforms existing methods in both camera controllability and visual quality. Our code and dataset will be made publicly available.
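To make the warping step concrete, the sketch below shows one way per-frame depth maps could be unprojected to a world-space point cloud and re-rendered as a depth sequence under a target camera trajectory. The function names, the pinhole camera model, and the simple z-buffer splatting are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def unproject(depth, K):
    """Lift a depth map (H, W) to camera-space 3D points using intrinsics K."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T           # pixel -> normalized camera rays
    return rays * depth.reshape(-1, 1)        # scale rays by per-pixel depth

def render_depth(points_world, K, w2c, H, W):
    """Z-buffer splat of world-space points into a depth map under a new camera."""
    pts_h = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    cam = (w2c @ pts_h.T).T[:, :3]            # world -> target camera frame
    cam = cam[cam[:, 2] > 1e-6]               # keep points in front of the camera
    proj = (K @ cam.T).T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    depth = np.full((H, W), np.inf)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    for (x, y), z in zip(uv[inside], cam[inside, 2]):
        depth[y, x] = min(depth[y, x], z)     # nearest surface wins (z-buffer)
    return np.where(np.isfinite(depth), depth, 0.0)

def warp_depth_sequence(depths, K, src_c2w, tgt_w2c):
    """Warp per-frame depth maps from source cameras into target-view depth maps."""
    out = []
    for d, c2w, w2c in zip(depths, src_c2w, tgt_w2c):
        pts_cam = unproject(d, K)
        pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
        pts_world = (c2w @ pts_h.T).T[:, :3]  # source camera frame -> world
        out.append(render_depth(pts_world, K, w2c, *d.shape))
    return np.stack(out)
```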
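Likewise, the dual-stream conditioning and the LoRA-based adapter can be pictured with a minimal PyTorch sketch. The module names (`LoRALinear`, `DualStreamCondition`), tensor layouts, and token-concatenation scheme below are hypothetical; the paper's actual conditioning pathway and adapter placement may differ.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank residual."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)       # keep pretrained VDM weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)        # adapter starts as an identity residual
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

class DualStreamCondition(nn.Module):
    """Sketch of view-content dual-stream conditioning: source-video latents
    (content stream) and warped-depth latents (view stream) are projected and
    concatenated with the noisy latents along the token dimension."""
    def __init__(self, latent_dim: int, hidden_dim: int):
        super().__init__()
        self.content_proj = nn.Linear(latent_dim, hidden_dim)
        self.view_proj = nn.Linear(latent_dim, hidden_dim)

    def forward(self, noisy_tokens, src_latents, depth_latents):
        content = self.content_proj(src_latents)   # source-video tokens
        view = self.view_proj(depth_latents)       # target-view depth tokens
        return torch.cat([noisy_tokens, content, view], dim=1)
```

In such a setup, only the LoRA parameters and the two projection layers would receive gradients during training, which matches the stated goal of preserving the pretrained VDM's knowledge priors.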


Comparison with warping-based methods

Here we display side-by-side videos comparing our method to top-performing warping-based methods.



Comparison with implicit methods

Here we display side-by-side videos comparing our method with the top-performing baseline ReCamMaster across a range of scenes, evaluating generation quality and camera-control precision.


[Side-by-side videos for multiple input scenes: each shows the camera trajectory alongside the videos generated by DepthDirector (Ours) and ReCamMaster.]