Explore a keynote presentation on implicit neural representations for 3D scene reconstruction and understanding. Delve into techniques for overcoming the limitations of fully connected network architectures in implicit approaches, and learn about a hybrid model that combines a neural implicit shape representation with 2D/3D convolutions for detailed object-level and large-scale scene reconstruction. Discover methods for capturing and manipulating visual appearance through surface light field representations, and gain insight into recent efforts to collect real-world material data for training these models. Examine the KITTI-360 dataset, which provides 360-degree sensor data and semantic annotations for outdoor environments. Topics include convolutional occupancy networks, object-level and scene-level reconstruction, appearance prediction, generative models, and joint estimation of pose, geometry, and spatially varying BRDF (SVBRDF).
Implicit Neural Representations: From Objects to 3D Scenes
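To make the core idea concrete: an implicit neural representation encodes a shape as a learned function f(x) mapping a 3D point to an occupancy probability, so geometry is queried at arbitrary resolution rather than stored as a voxel grid or mesh. Below is a minimal NumPy sketch of such an occupancy field; the layer widths and random (untrained) weights are illustrative placeholders, not the architecture from the talk, and a real model would be trained from watertight meshes or depth data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny MLP with two hidden layers of width 64 (hypothetical sizes).
# Weights are random stand-ins; training would fit them to a shape.
W1 = rng.standard_normal((3, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 64)) * 0.1
b2 = np.zeros(64)
W3 = rng.standard_normal((64, 1)) * 0.1
b3 = np.zeros(1)

def occupancy(points):
    """Map an (N, 3) array of 3D coordinates to (N,) occupancy in [0, 1]."""
    h = relu(points @ W1 + b1)
    h = relu(h @ W2 + b2)
    return sigmoid(h @ W3 + b3).ravel()

# The field can be queried at any continuous point, which is what makes
# implicit representations resolution-free.
pts = rng.uniform(-1.0, 1.0, size=(5, 3))
occ = occupancy(pts)
```

The hybrid model mentioned above extends this idea by conditioning such a decoder on convolutional features, so the network is no longer a single global fully connected function and can scale from single objects to whole scenes.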