Contents:
1. Introduction
2. Two dominating paradigms for self-driving
3. Direct perception
4. Conditional affordance learning
5. Intermediate representations
6. More related findings
7. What is a good visual abstraction?
8. Input/Output
9. NoCrash benchmark
10. Identifying relevant classes
11. Results
12. Qualitative results
13. Summary
14. Dataset overview
15. Illustrations
Description:
Explore a keynote presentation on label-efficient visual abstractions for autonomous driving. Delve into the trade-offs between annotation costs and driving performance in semantic segmentation-based approaches. Learn about practical insights for exploiting segmentation-based visual abstractions more efficiently, resulting in reduced variance of learned policies. Examine the impact of different segmentation-based modalities on behavior cloning agents in the CARLA simulator. Discover how to optimize intermediate representations for driving tasks, moving beyond traditional image-space loss functions to maximize safety and distance traveled per intervention.
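To ground the description, here is a minimal behavior-cloning sketch in PyTorch showing how a policy can drive from a segmentation-based visual abstraction rather than raw RGB. Everything here is an illustrative assumption, not the architecture from the talk: the hypothetical `SegmentationBCPolicy` name, the class count, the layer sizes, and the command-conditioned heads (in the spirit of conditional imitation learning) are all placeholders.

```python
# Minimal behavior-cloning sketch (illustrative, not the talk's architecture):
# a policy that drives from a one-hot semantic segmentation map.
import torch
import torch.nn as nn

class SegmentationBCPolicy(nn.Module):
    """Maps a one-hot segmentation map plus a navigation command to controls."""

    def __init__(self, num_classes: int = 6, num_commands: int = 4):
        super().__init__()
        # Input channels = number of semantic classes kept in the abstraction
        # (e.g. road, lane marking, vehicle, pedestrian, ...); how few such
        # classes suffice is exactly what the talk studies.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One control head per high-level navigation command
        # (follow lane / turn left / turn right / go straight),
        # in the spirit of conditional imitation learning.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
            for _ in range(num_commands)
        )  # each head outputs (steer, throttle, brake)

    def forward(self, seg_onehot: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(seg_onehot)
        # Evaluate all heads, then select the branch matching each
        # sample's navigation command.
        out = torch.stack([head(feat) for head in self.heads], dim=1)
        return out[torch.arange(feat.size(0)), command]

# Behavior cloning: regress expert controls with an L1 loss.
policy = SegmentationBCPolicy()
seg = torch.zeros(2, 6, 160, 384)   # batch of one-hot segmentation maps
cmd = torch.tensor([0, 2])          # per-sample navigation command
expert = torch.rand(2, 3)           # expert (steer, throttle, brake) targets
loss = nn.functional.l1_loss(policy(seg, cmd), expert)
loss.backward()
```

Reducing `num_classes`, i.e. keeping only the few semantic classes that matter for driving, is the label-efficiency lever the talk investigates: fewer classes mean cheaper annotations, and the open question is how much driving performance that trade-off costs.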

Label Efficient Visual Abstractions for Autonomous Driving

Andreas Geiger