1. Intro
2. Research Question and Motivation
3. Why Is It Important to Know?
4. Goal of the Work
5. The Main Idea: Explainable Driving Model
6. The Network Architecture
7. Preprocessing
8. Convolutional Feature Encoder
9. Vehicle Controller (3)
10. Strongly Aligned Attention (SAA)
11. Textual Explanation Generator: Explanation LSTM
12. Berkeley DeepDrive eXplanation Dataset
13. Training
14. Evaluation of the Vehicle Controller
15. Comparing Variants of the Vehicle Controller
16. Attention under Regularization
17. Evaluation of the Explanation Generator
18. Human Evaluation
Description:
Explore the intricacies of self-driving vehicle technology in this 28-minute lecture from the University of Central Florida. Delve into the research question and motivation behind explainable driving models, and understand their importance and goals. Learn about the main idea of an explainable driving model and its network architecture, including preprocessing, the convolutional feature encoder, and the vehicle controller. Discover the Strongly Aligned Attention (SAA) mechanism and the textual explanation generator with its explanation LSTM. Examine the Berkeley DeepDrive eXplanation dataset and the training process. Evaluate the vehicle controller, compare its variants, and analyze attention under regularization. Finally, assess the explanation generator through both automated and human evaluation.
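
To make the architecture described above more concrete, here is a minimal PyTorch sketch of the general pattern: a small CNN encodes each frame into a grid of features, a softmax attention map (conditioned on the controller's recurrent state) weights the grid, and an LSTM predicts control outputs per time step. All layer sizes, the two control outputs, and the class name AttentionVehicleController are illustrative assumptions, not the lecture's actual model or its SAA formulation.

```python
import torch
import torch.nn as nn

class AttentionVehicleController(nn.Module):
    """Minimal sketch (hypothetical sizes): CNN feature grid ->
    spatial softmax attention -> LSTM controller -> control outputs."""

    def __init__(self, feat_dim=64, hidden_dim=128, n_controls=2):
        super().__init__()
        # Small convolutional feature encoder (stand-in for the real backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.attn = nn.Linear(feat_dim + hidden_dim, 1)  # scores each grid cell
        self.lstm = nn.LSTMCell(feat_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_controls)    # e.g. acceleration, course change

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        h = frames.new_zeros(b, self.lstm.hidden_size)
        c = frames.new_zeros(b, self.lstm.hidden_size)
        controls, attn_maps = [], []
        for step in range(t):
            feats = self.encoder(frames[:, step])        # (b, d, h', w')
            grid = feats.flatten(2).transpose(1, 2)      # (b, h'*w', d)
            # Attention scores conditioned on the previous controller state.
            state = h.unsqueeze(1).expand(-1, grid.size(1), -1)
            alpha = torch.softmax(self.attn(torch.cat([grid, state], dim=-1)), dim=1)
            context = (alpha * grid).sum(dim=1)          # attended feature vector
            h, c = self.lstm(context, (h, c))
            controls.append(self.head(h))
            attn_maps.append(alpha.squeeze(-1))
        return torch.stack(controls, dim=1), torch.stack(attn_maps, dim=1)

# Toy usage: 2 clips of 4 frames at 90x160 resolution.
model = AttentionVehicleController()
controls, attention = model(torch.randn(2, 4, 3, 90, 160))
print(controls.shape, attention.shape)  # (2, 4, 2) and (2, 4, 920)
```

The per-step attention maps returned alongside the controls are the kind of signal an explanation module could attend to; the lecture's Strongly Aligned Attention and explanation LSTM are not reproduced here.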

Textual Explanation for Self-Driving Vehicles

University of Central Florida