Implicit Neural Representations: From Objects to 3D Scenes

Andreas Geiger
Outline:
1. Intro
2. Collaborators
3. 3D Representations
4. Limitations
5. Convolutional Occupancy Networks
6. Comparison
7. Object-Level Reconstruction
8. Training Speed
9. Scene-Level Reconstruction
10. Large-Scale Reconstruction
11. Key Insights
12. Problem Definition
13. Existing Representation
14. Overfitting to Single Objects
15. Single Object Experiments
16. Single Image Appearance Prediction
17. Single View Appearance Prediction
18. Generative Model
19. Materials
20. Joint Estimation of Pose, Geometry and SVBRDF
21. Qualitative Results
22. 3D Annotations
Description:
Explore a keynote presentation on implicit neural representations for 3D scene reconstruction and understanding. Delve into advanced techniques for overcoming limitations of fully-connected network architectures in implicit approaches. Learn about a hybrid model combining neural implicit shape representation with 2D/3D convolutions for detailed object and large-scale scene reconstruction. Discover methods for capturing and manipulating visual appearance through surface light field representations. Gain insights into recent efforts in collecting real-world material information for training these models. Examine the KITTI-360 dataset, featuring 360-degree sensor data and semantic annotations for outdoor environments. Cover topics including convolutional occupancy networks, object-level and scene-level reconstruction, appearance prediction, generative models, and joint estimation of pose, geometry, and SVBRDF.
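
To make the core idea concrete, here is a minimal PyTorch sketch of an implicit occupancy decoder conditioned on convolutional features, in the spirit of the convolutional occupancy networks mentioned above. The class name ConvOccupancySketch, the learnable feature plane (standing in for a real point-cloud encoder plus 2D U-Net), and all layer sizes are illustrative assumptions, not the architecture presented in the talk.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccupancySketch(nn.Module):
    # Sketch only: a 2D feature plane conditions an MLP that maps
    # 3D query points to occupancy probabilities. In the real method
    # the plane would come from a convolutional encoder.
    def __init__(self, feat_dim=32, plane_res=64):
        super().__init__()
        self.plane = nn.Parameter(torch.randn(1, feat_dim, plane_res, plane_res))
        self.decoder = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, pts):  # pts: (B, N, 3), coordinates in [-1, 1]^3
        B = pts.shape[0]
        # Project queries onto the xy-plane and bilinearly sample local
        # features: this is the "convolutional" conditioning of the decoder.
        uv = pts[..., :2].unsqueeze(2)                        # (B, N, 1, 2)
        feats = F.grid_sample(self.plane.expand(B, -1, -1, -1),
                              uv, align_corners=True)         # (B, C, N, 1)
        feats = feats.squeeze(-1).transpose(1, 2)             # (B, N, C)
        logits = self.decoder(torch.cat([pts, feats], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)              # (B, N) in (0, 1)

model = ConvOccupancySketch()
queries = torch.rand(1, 1024, 3) * 2 - 1   # random points in the unit cube
occ = model(queries)                       # per-point occupancy probabilities

Because occupancy is queried at arbitrary continuous points rather than on a fixed voxel grid, this kind of representation can scale from single objects to the large scenes discussed in the talk.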
