Awesome-DynRF

Figure: Diverse methodological approaches under a unified representational framework

More content and details can be found in our Survey Paper: Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field.

πŸ” Contents

  1. Abstract
  2. Taxonomy
  3. Benchmark
  4. Paper Lists
  5. Citation
  6. Other Resources

📌 1. Abstract

Dynamic scene representation and reconstruction have undergone transformative advances in recent years, catalyzed by breakthroughs in neural radiance fields and 3D Gaussian splatting techniques. While initially developed for static environments, these methodologies have rapidly evolved to address the complexities inherent in 4D dynamic scenes through an expansive body of research. Coupled with innovations in differentiable volumetric rendering, these approaches have significantly enhanced the quality of motion representation and dynamic scene reconstruction, thereby garnering substantial attention from the computer vision and graphics communities. This survey presents a systematic analysis of over 200 papers focused on dynamic scene representation using radiance fields, spanning the spectrum from implicit neural representations to explicit Gaussian primitives. We categorize and evaluate these works through multiple critical lenses: motion representation paradigms, reconstruction techniques for varied scene dynamics, auxiliary information integration strategies, and regularization approaches that ensure temporal consistency and physical plausibility. We organize diverse methodological approaches under a unified representational framework, concluding with a critical examination of persistent challenges and promising research directions. By providing this comprehensive overview, we aim to establish a definitive reference for researchers entering this rapidly evolving field while offering experienced practitioners a systematic understanding of both conceptual principles and practical frontiers in dynamic scene reconstruction.

📚 2. Taxonomy

2.1 Motion Types

Figure: A 2D illustration of various motion types

Real-world environments exhibit diverse motion patterns that can be categorized hierarchically from specific to general types. We classify these patterns into rigid motion, articulated motion, general non-rigid motion, and hybrid motion, which combines multiple patterns.
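The four motion types above can be contrasted in a small NumPy sketch; this is an illustrative assumption of ours, not notation from the survey, and the function names and per-part rigid decomposition are hypothetical:

```python
# Hedged sketch (not from the survey): how each motion type in the taxonomy
# acts on a 3D point set. All names and shapes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
points = rng.standard_normal((100, 3))  # canonical (rest-pose) point cloud

def rigid(points, R, t):
    """Rigid motion: one rotation R and translation t shared by all points."""
    return points @ R.T + t

def articulated(points, part_ids, Rs, ts):
    """Articulated motion: each part k moves rigidly with its own (Rs[k], ts[k])."""
    out = np.empty_like(points)
    for k, (R, t) in enumerate(zip(Rs, ts)):
        mask = part_ids == k
        out[mask] = points[mask] @ R.T + t
    return out

def non_rigid(points, displacement):
    """General non-rigid motion: an arbitrary per-point displacement field."""
    return points + displacement(points)

# Hybrid motion composes the patterns above, e.g. articulated limbs
# plus a non-rigid residual (cloth, soft tissue) on top.
moved = rigid(points, np.eye(3), np.array([1.0, 0.0, 0.0]))
```

Articulated motion reduces to rigid motion when every point belongs to one part, and rigid motion is the special case of a non-rigid displacement field that is constrained to a single rotation and translation.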

2.2 Motion Representation Methods

Figure: Illustration of typical motion representation methods

2.3 Scene Representation

Figure: A unified framework to encapsulate various reconstruction paradigms

⚡ 3. Benchmark

| Dataset | Year | Inputs | Additional Annotations | Motion |
|---|---|---|---|---|
| 🟥 Tanks and Temples | 2017 | Monocular videos | 3D surface geometry | Non-rigid Motion |
| 🟥 CMU Panoptic | 2017 | Multi-view videos, 480 VGA camera views, 10 RGB-D sensors | 3D body pose, 3D facial landmarks, transcripts + speaker ID | Non-rigid Motion |
| 🟥 D-NeRF | 2021 | Monocular videos | - | Non-rigid Motion |
| 🟥 Plenoptic | 2022 | Multi-view videos | Depth maps, RGB images, and calibration data | Non-rigid Motion |
| 🟥 Tensor4D | 2023 | Multi-view videos captured by RGB cameras | - | Non-rigid Motion |
| 🟥 Epic Fields | 2024 | Monocular videos | Semantic annotations for actions and objects, masks of hands and active objects | Non-rigid Motion |
| 🟨 KITTI | 2012 | Stereo images | Stereo images, optical flow, visual odometry, 3D object detection, 3D tracking | Rigid Motion |
| 🟨 nuScenes | 2020 | 1 LiDAR, 5 RADAR, 6 cameras, IMU, and GPS | 3D bounding boxes, semantic categories, object attributes for 23 object classes | Rigid Motion |
| 🟨 Waymo | 2020 | High-resolution sensor data (LiDAR, camera, radar) | 3D semantic segmentation labels, object trajectories, 3D maps | Rigid Motion |
| 🟨 KITTI-360 | 2022 | Fisheye images, pushbroom laser scans, geo-localized vehicle poses | Semantic instance annotations in 2D and 3D, accurate localization | Rigid Motion |
| 🟨 Virtual KITTI 2 | 2020 | RGB images | Semantic segmentation, instance segmentation, depth, optical flow, and scene flow | Rigid Motion |
| 🟨 NeRF On-The-Road | 2023 | Subset of the Waymo Open Dataset (dynamic driving scenes) | Scene geometry, appearance, motion, and semantics via self-supervision | Rigid Motion |
| 🟨 Argoverse NVS | 2024 | High-res images from 7 ring cameras, 2 stereo cameras, LiDAR | 3D cuboid annotations for 26 object categories, map-aligned poses, and HD maps | Rigid Motion |
| 🟦 RobustNeRF Dataset | 2023 | Multi-view videos with dynamic distractors | Distractors modeled as outliers | Dynamic Noise |
| 🟩 People-Snapshot | 2018 | Monocular videos | 3D body models, textures, and animation skeletons | Articulated & Non-rigid |
| 🟩 DynaCap | 2021 | Multi-view videos | - | Articulated & Non-rigid |
| 🟩 ZJU-MoCap | 2021 | Multi-view videos | - | Articulated & Non-rigid |
| 🟩 NeuMan | 2022 | Monocular videos | Human pose, shape, masks, camera poses, sparse scene model, and depth maps | Articulated & Non-rigid |
| 🟩 THuman4 | 2022 | Multi-view videos | Foreground segmentation, calibration data, and SMPL-X fitting | Articulated & Non-rigid |
| 🟩 ActorsHQ | 2023 | Multi-view videos from 160 synchronized cameras | Axis-aligned bounding boxes, occupancy grids, Alembic-format meshes | Articulated & Non-rigid |
| 🟩 CoP3D | 2023 | Monocular casual videos of different cats and dogs | Camera parameters and object masks | Articulated & Non-rigid |

📜 4. Paper Lists

📚 4.1 Survey

πŸ“ 4.1.1 perprint

Year Conference/Journal Paper Code Type
2020 arXiv Differentiable rendering: A survey Survey
2020 arXiv Neural volume rendering: Nerf and beyond Survey
2022 arXiv Nerf: Neural radiance field in 3d vision, a comprehensive review Survey
2023 arXiv BeyondPixels: A comprehensive review of the evolution of neural radiance fields Survey
2023 arXiv Neural radiance fields: Past, present, and future Survey
2024 arXiv Semantically-aware neural radiance fields for visual scene understanding: A comprehensive review Survey
2024 arXiv Neural radiance field in autonomous driving: A survey Survey
2024 arXiv How nerfs and 3d gaussian splatting are reshaping slam: A survey Survey
2024 arXiv NeRF in robotics: A survey Survey
2024 arXiv Neural Fields in Robotics: A Survey Survey

📄 4.1.2 Paper

Year Conference/Journal Paper Code Type
2022 Computer Graphics Forum Neural fields in visual computing and beyond Survey
2024 IEEE TVCG 3d gaussian splatting as new era: A survey Survey
2024 Computational Visual Media Recent advances in 3d gaussian splatting Survey
2025 IEEE TCSVT 3d gaussian splatting: Survey, technologies, challenges, and opportunities Survey
2024 IEEE Access Gaussian splatting: 3d reconstruction and novel view synthesis, a review Survey
2022 ICSPS Human 3d avatar modeling with implicit neural representation: A brief survey Survey
2025 EAAI Benchmarking neural radiance fields for autonomous robots: An overview Survey
2024 CGF Recent Trends in 3D Reconstruction of General Non-Rigid Scenes Survey
2023 CGF State of the Art in Dense Monocular Non-Rigid 3D Reconstruction Survey
2024 RCIM Neural radiance fields in the industrial and robotics domain: Applications, research opportunities and use cases Survey
2024 Electronics A Brief Review on Differentiable Rendering: Recent Advances and Challenges Survey

πŸ—οΈ 4.2 Reconstruction with Rigid Motion

πŸ“ 4.2.1 perprint

Year Conference/Journal Paper Code Type
2023 arXiv Prosgnerf: Progressive dynamic neural scene graph with frequency modulated auto-encoder in urban scenes Urban

📄 4.2.2 Paper

Year Conference/Journal Paper Code Type
2024 ECCV Street gaussians: Modeling dynamic urban scenes with gaussian splatting Code Urban
2021 CVPR Neural scene graphs for dynamic scenes Code Urban
2024 CVPR Multi-level neural scene graphs for dynamic urban environments Code Urban
2024 CVPR 3d geometry-aware deformable gaussian splatting for dynamic view synthesis Code Urban
2021 CVPR Star: Self-supervised tracking and reconstruction of rigid objects in motion with neural rendering Indoor
2022 CVPR Panoptic neural fields: A semantic object-aware neural scene representation Urban
2024 CVPR Hugs: Holistic urban 3d scene understanding via gaussian splatting Code Urban
2023 ICLR S-nerf: Neural radiance fields for street views Code Urban
2024 CVPR Drivinggaussian: Composite gaussian splatting for surrounding dynamic autonomous driving scenes Code Urban
2023 CVPR Unisim: A neural closed-loop sensor simulator Project Site Urban
2024 CVPR Neurad: Neural rendering for autonomous driving Code Urban

🕺 4.3 Reconstruction with Articulated Motion

πŸ“ 4.3.1 Human Body-Perprint

Year Conference/Journal Paper Code Type
2022 arXiv Generalizable neural performer: Learning robust radiance fields for human novel view synthesis Code Human Body
2023 arXiv Splatarmor: Articulated gaussian splatting for animatable humans from monocular rgb video Human Body
2024 arXiv Bags: Building animatable gaussian splatting from a monocular video with diffusion priors Code Human Body

📄 4.3.2 Human Body-Paper

Year Conference/Journal Paper Code Type
2022 CVPR Humannerf: Free-viewpoint rendering of moving people from monocular video Code Human Body
2021 CVPR Animatable neural radiance fields for modeling dynamic human bodies Code Human Body
2023 CVPR Vid2avatar: 3d avatar reconstruction from videos in the wild via self-supervised scene decomposition Human Body
2023 ICCV Npc: Neural point characters from video Code Human Body
2022 ECCV Tava: Template-free animatable volumetric actors Code Human Body
2021 TOG Neural actor: Neural free-view synthesis of human actors with pose control Human Body
2021 ICCV Neural articulated radiance field Code Human Body
2021 NeurIPS Neural human performer: Learning generalizable radiance fields for human performance rendering Code Human Body
2023 CVPR Monohuman: Animatable human neural field from monocular video Code Human Body
2022 CVPR Structured local radiance fields for human avatar modeling Human Body
2023 CVPR Instant-NVR: Instant neural volumetric rendering for human-object interactions from monocular RGBD stream Human Body
2023 CVPR Instantavatar: Learning avatars from monocular video in 60 seconds Code Human Body
2021 ICCV Snarf: Differentiable forward skinning for animating non-rigid neural implicit shapes Human Body
2023 TPAMI Fast-snarf: A fast deformer for articulated neural fields Code Human Body
2022 CVPR Pina: Learning a personalized implicit neural avatar from a single rgb-d video sequence Human Body
2023 CVPR X-avatar: Expressive human avatars Code Human Body
2021 CVPR Pixel-aligned volumetric avatars Human Head
2024 CVPR 4k4d: Real-time 4d view synthesis at 4k resolution Human Performance
2021 ACM TOG Real-time deep dynamic characters Human Performance
2021 NeurIPS A-nerf: Articulated neural radiance fields for learning human shape, appearance, and pose Code Human Body
2020 ECCV Nasa: Neural articulated shape approximation Human Body
2021 CVPR Lasr: Learning articulated shape reconstruction from a monocular video Code Human Body
2021 NeurIPS Viser: Video-specific surface embeddings for articulated 3d shape reconstruction Code Human Body
👆 NeRF-based 👇 3DGS-based
2024 CVPR Hugs: Human gaussian splats Code Human Body
2024 CVPR Gart: Gaussian articulated template models Code Human Body
2024 ECCV Expressive whole-body 3D gaussian avatar Code Human Body and Face
2024 ECCV Gauhuman: Articulated gaussian splatting from monocular human videos Code Human Body
2024 CVPR Animatable gaussians: Learning pose-dependent gaussian maps for high-fidelity human avatar modeling Code Human Body
2024 CVPR Ash: Animatable gaussian splats for efficient and photoreal human rendering Code Human Body
2024 CVPR 3dgs-avatar: Animatable avatars via deformable 3d gaussian splatting Code Human Body
2024 CVPR Gaussianavatar: Towards realistic human avatar modeling from a single video via animatable 3d gaussians Code Human Body
2024 CVPR Splattingavatar: Realistic real-time human avatars with mesh-embedded gaussian splatting Code Human Body and Face
2024 CVPR Gomavatar: Efficient animatable human modeling from monocular video using gaussians-on-mesh Code Human Body
2024 IJCV Moda: Modeling deformable 3d objects from casual videos Code Human Body

📄 4.3.3 Animal-Paper

Year Conference/Journal Paper Code Type
2017 CVPR 3d menagerie: Modeling the 3d shape and pose of animals Animal
2022 ECCV Who left the dogs out? 3d animal reconstruction with expectation maximization in the loop Code Animal
2023 CVPR Reconstructing animatable categories from videos Code Animal
2022 NeurIPS Lassie: Learning articulated shapes from sparse image ensemble via 3d part discovery Code Animal
2022 SIGGRAPH Artemis: articulated neural pets with appearance and motion synthesis Code Animal
2022 CVPR Banmo: Building animatable 3d neural models from many casual videos Code Animal
2023 CVPR Magicpony: Learning articulated 3d animals in the wild Code Animal
2023 CVPR Common pets in 3d: Dynamic new-view synthesis of real-life deformable categories Code Animal
2024 ECCV Animal avatars: Reconstructing animatable 3D animals from casual videos Code Animal

📄 4.3.4 Hand-Paper

Year Conference/Journal Paper Code Type
2022 CVPR Lisa: Learning implicit shape and appearance of hands Hand
2023 ICCV Livehand: Real-time and photorealistic neural hand rendering Code Hand
2024 CVPR URhand: Universal relightable hands Code Hand
2023 CVPR Relightablehands: Efficient neural relighting of articulated hand models Hand
2025 TPAMI HandRT: Simultaneous hand shape and appearance reconstruction with pose tracking from monocular RGB-d video Code Hand
2022 CVPR What's in your hands? 3d reconstruction of generic objects in hands Code Hand

πŸ“ 4.3.5 Object-Perprint

Year Conference/Journal Paper Code Type
2025 arXiv Artgs: Building interactable replicas of complex articulated objects via gaussian splatting Code Object

📄 4.3.6 Object-Paper

Year Conference/Journal Paper Code Type
2022 ICLR Clanerf: Category-level articulated neural radiance field Object
2023 ICCV Paris: Part-level reconstruction and motion analysis for articulated objects Code Object
2024 ECCV Leia: Latent view-invariant embeddings for implicit 3d articulation Object
2024 CVPR Reacto: Reconstructing articulated objects from a single video Code Object

🤖 4.4 Reconstruction with Non-rigid Motion

📄 4.4.1 4D Spacetime-Paper

Year Conference/Journal Paper Code Type
2021 CVPR Space-time neural irradiance fields for free-viewpoint video Code General Motion
2024 ICLR Real-time photorealistic dynamic scene representation and rendering with 4d gaussian splatting Code General Motion
2022 CVPR Neural 3d video synthesis from multi-view video Code General Motion
2023 CVPR Suds: Scalable urban dynamic scenes Code Urban
2023 ACM TOG Neural volumes: Learning dynamic renderable volumes from images Code Object
2022 NeurIPS Neural surface reconstruction of dynamic scenes with monocular rgbd camera Code Object

📄 4.4.2 Canonical Space with Deformation Field-Paper

Year Conference/Journal Paper Code Type
2021 CVPR D-nerf: Neural radiance fields for dynamic scenes Code Object
2021 ICCV Nerfies: Deformable neural radiance fields Code General Motion
2021 ACM TOG Hypernerf: a higher-dimensional representation for topologically varying neural radiance fields Code General Motion
2024 CVPR 4d gaussian splatting for real-time dynamic scene rendering Code General Motion
2024 CVPR Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction Code General Motion
2025 ICLR MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos Code General Motion
2022 NeurIPS Neural surface reconstruction of dynamic scenes with monocular rgbd camera Code Object

📄 4.4.3 Frame-to-Frame Flow Field-Paper

Year Conference/Journal Paper Code Type
2021 ICCV Dynamic view synthesis from dynamic monocular video Code General Motion
2021 ICCV Neural radiance flow for 4d view synthesis and video processing Code General Motion
2024 ICLR Emernerf: Emergent spatial-temporal scene decomposition via self-supervision Code Urban
2021 CVPR Neural scene flow fields for space-time view synthesis of dynamic scenes General Motion
2019 ICCV Occupancy flow: 4d reconstruction by learning particle dynamics Code Human
2023 CVPR Common pets in 3d: Dynamic new-view synthesis of real-life deformable categories Code Animal
2023 ICCV Mononerf: Learning a generalizable dynamic radiance field from monocular videos Code General Motion
2023 CVPR Dynpoint: Dynamic neural point for view synthesis Code General Motion

πŸ“ 4.4.4 Point Tracking-Perprint

Year Conference/Journal Paper Code Type
2021 arXiv Neural trajectory fields for dynamic novel view synthesis Code

📄 4.4.5 Point Tracking-Paper

Year Conference/Journal Paper Code Type
2024 3DV Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis Code General Motion

📄 4.4.6 Factorization-Paper

Year Conference/Journal Paper Code Type
2024 CVPR 4d gaussian splatting for real-time dynamic scene rendering Code General Motion
2023 CVPR K-planes: Explicit radiance fields in space, time, and appearance Code General Motion
2023 CVPR Hexplane: A fast representation for dynamic scenes Code General Motion
2023 CVPR Tensor4d: Efficient neural 4d decomposition for high-fidelity dynamic reconstruction and rendering Code General Motion
2024 ECCV Splatfields: Neural gaussian splats for sparse 3d and 4d reconstruction Code Object
2024 3DV Fast High Dynamic Range Radiance Fields for Dynamic Scenes Code General Motion

📄 4.4.7 Reconstruction with Hybrid Motion-Paper

Year Conference/Journal Paper Code Type
2025 ICLR OmniRe: Omni Urban Scene Reconstruction Code Urban
2022 ECCV Tava: Template-free animatable volumetric actors Code Human
2024 ECCV Expressive whole-body 3D gaussian avatar Code Human
2022 ECCV Neuman: Neural human radiance field from a single video Code Human
2024 CVPR Gomavatar: Efficient animatable human modeling from monocular video using gaussians-on-mesh Code Human
2023 CVPR Learning neural volumetric representations of dynamic humans in minutes Code Human

📚 5. Citation


```bibtex
@misc{fan2025advancesradiancefielddynamic,
      title={Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field},
      author={Jinlong Fan and Xuepu Zeng and Jing Zhang and Mingming Gong and Yuxiang Yang and Dacheng Tao},
      year={2025},
      eprint={2505.10049},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.10049},
}
```

📭 6. Other Resources

Related Projects

Tools

About

Official repo for "Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field"
