Explore implicit neural scene representations in this 57-minute talk by Vincent Sitzmann, given in the seminar series of the Autonomous Vision Group in Tübingen. Delve into the implications of signal representation for algorithm development, examining alternatives to discrete representations such as pixel grids and point clouds. Learn how implicit scene representations can be embedded in neural rendering frameworks and how gradient-based meta-learning enables fast inference. Discover how these techniques enable 3D reconstruction from a single 2D image and yield features useful for semantic segmentation. Gain insights into the potential of neural scene representations for enabling agents to reason about their environments and for modeling complex scenes from limited observations.
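
To make the core idea concrete, below is a minimal, illustrative sketch (not the speaker's actual code or architecture): an implicit neural representation parameterizes a signal as a continuous function, here a small coordinate MLP with sine activations (in the spirit of SIREN) that maps 2D coordinates to RGB values instead of storing the signal on a fixed pixel grid. PyTorch, the layer sizes, and the class names `SineLayer` and `ImplicitImage` are assumptions made for this example.

```python
# Illustrative sketch of an implicit neural representation (assumptions noted above).
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """Linear layer followed by a sine nonlinearity (SIREN-style)."""

    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class ImplicitImage(nn.Module):
    """Continuous image: maps coordinates (x, y) in [-1, 1]^2 to RGB values."""

    def __init__(self, hidden_features=256, hidden_layers=3):
        super().__init__()
        layers = [SineLayer(2, hidden_features)]
        for _ in range(hidden_layers):
            layers.append(SineLayer(hidden_features, hidden_features))
        layers.append(nn.Linear(hidden_features, 3))  # RGB output
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


if __name__ == "__main__":
    model = ImplicitImage()
    # Sample the continuous representation at an arbitrary resolution:
    # no pixel grid is baked into the representation itself.
    side = 64
    xs = torch.linspace(-1.0, 1.0, side)
    grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1)
    rgb = model(grid.reshape(-1, 2)).reshape(side, side, 3)
    print(rgb.shape)  # torch.Size([64, 64, 3])
```

Because the representation is a function of continuous coordinates, it can be queried at any resolution; fitting its weights to observations (or embedding it in a differentiable renderer, as discussed in the talk) is what turns it into a scene representation.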