Explaining the Prediction: Class Activation Mapping (CAM)
Explaining Model Prediction
Application to Medical Imaging
Explainable AI for Classification
Quantifying the Interpretability of Individual Units: Network Dissection
Key units for classifying Living Room
Key units for classifying Restaurant: Tables
Rapid Progress for Image Generation: 2014
2021: Text2Image Model from OpenAI
AI Model for Image Generation
Identifying Causality in Latent Space
Pushing Latent Code to the Subspace in Latent Space
Steering a Generative Model
Steerable AI Model for Generation
Understanding the Role of Individual Units
Unsupervised Learning of Steerable Dimensions
AI Model for Machine Autonomy
Human-in-the-Loop Reinforcement Learning
Improving the Diversity of the Environment
Generalizability Is Improved by Environment Diversity
Impact Areas of Human-Centric AI
Description:
Explore a comprehensive lecture from the ICCV'21 tutorial on human-centric AI for computer vision and machine autonomy. Delve into the challenges and successes of AI, examining whether deep AI models can think like humans. Investigate explainable AI techniques such as Class Activation Mapping (CAM) for scene classification and medical imaging. Learn about network dissection, the rapid progress in image generation, and identifying causality in latent space. Discover how to steer generative models and understand the role of individual units. Examine human-in-the-loop reinforcement learning and how environment diversity improves generalizability. Gain insights into the key impact areas of human-centric AI in this informative 39-minute presentation by Bolei Zhou.
Human-Centric AI for Computer Vision and Machine Autonomy - ICCV'21 Tutorial