ModalNeRF: Neural Modal Analysis and Synthesis for Free-Viewpoint Navigation in Dynamically Vibrating Scenes
Abstract
Recent advances in Neural Radiance Fields enable the capture of scenes with motion. However, editing the captured motion remains difficult: no existing method allows editing beyond the space of motions present in the original video, nor physically-based editing. We present the first approach that enables physically-based editing of motion in a scene containing vibrating or periodic motion, captured with a single hand-held video camera. We first introduce a Lagrangian representation that models motion as the displacement of particles and is learned jointly with the radiance field. These particles provide a continuous representation of motion over the sequence, on which we perform a modal analysis via a Fourier transform of the particle displacements over time. The extracted modes enable motion synthesis and easy editing of the motion, while inheriting the radiance field's ability for free-viewpoint synthesis of the captured 3D scene. We demonstrate our method on synthetic and real captured scenes.
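To make the modal-analysis step concrete, here is a minimal NumPy sketch of the idea described in the abstract: given per-particle 3D displacement trajectories sampled at a fixed frame rate, take a Fourier transform along time, keep the dominant frequencies as modes, and resynthesize (or edit) motion from those modes. This is an illustration under stated assumptions, not the authors' implementation; the function names `extract_modes` and `synthesize`, the energy-based ranking of modes, and the per-mode gains are hypothetical.

```python
import numpy as np

def extract_modes(displacements, fps, num_modes=3):
    """Extract dominant vibration modes from particle trajectories.

    displacements: (T, N, 3) array of 3D displacements for N particles
    over T frames, sampled at `fps` frames per second.
    Returns the dominant frequencies (Hz) and the complex mode shapes,
    one (N, 3) array per mode.
    """
    T = displacements.shape[0]
    # FFT along time for every particle coordinate: (T, N, 3) -> (F, N, 3).
    # The 2/T factor roughly normalizes amplitudes to the input scale.
    spectrum = np.fft.rfft(displacements, axis=0) * (2.0 / T)
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    # Rank frequencies by total spectral energy over all particles,
    # ignoring the DC bin (static offset).
    energy = np.abs(spectrum).sum(axis=(1, 2))
    energy[0] = 0.0
    top = np.argsort(energy)[::-1][:num_modes]
    return freqs[top], spectrum[top]

def synthesize(mode_freqs, mode_shapes, t, gains=None):
    """Resynthesize per-particle displacement at time t (seconds).

    `gains` lets a user amplify or damp individual modes, which is the
    kind of physically-based edit the abstract describes.
    """
    if gains is None:
        gains = np.ones(len(mode_freqs))
    disp = np.zeros(mode_shapes.shape[1:])
    for g, f, shape in zip(gains, mode_freqs, mode_shapes):
        # Each complex mode shape encodes amplitude and phase; its
        # contribution at time t is Re(shape * e^{2*pi*i*f*t}).
        disp += g * np.real(shape * np.exp(2j * np.pi * f * t))
    return disp

# Example edit: double the strength of the dominant mode.
# freqs, shapes = extract_modes(particle_displacements, fps=30)
# edited = synthesize(freqs, shapes, t=0.5, gains=[2.0, 1.0, 1.0])
```

In this sketch, editing happens purely in mode space (per-mode gains); the displaced particles would then drive the radiance field's deformation for free-viewpoint rendering.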