1. Intro
2. Math Myth of ML (circa 2008)
3. Spaces between the Math
4. Trust for whom?
5. Train a neural network to predict wolf vs. husky
6. Explanations for neural network prediction
7. Accuracy vs. Interpretability
8. Explaining predictions
9. Explaining prediction of Inception Neural Network
10. Anchors for Visual Question Answering
11. Type 1 Diabetes Management
12. Standard Intervention
13. Oversensitivity in image classification
14. Beyond Test-Set Accuracy
15. Closing the Loop with Simple Data Augmentation
16. Checklist: Test Linguistic Capabilities of Model
17. Checklist: Categories of Tests
18. Addressing Challenge of Test Creation
19. User Study: Quora Question Pairs (n=18, 2 hours)
20. Minding the Gap
21. Adaptive Loss Alignment (ALA)
22. And this gap is increasing with foundation models...
23. Optimizing for multiple metrics
Description:
Explore a comprehensive framework for building trust in machine learning and AI systems in this Stanford University seminar. Delve into Professor Carlos Ernesto Guestrin's discussion on the three pillars of clarity, competence, and alignment that can lead to more effective and trustworthy AI. Examine real-world examples, including visual question answering, type 1 diabetes management, and image classification, to understand the challenges and solutions in creating interpretable and reliable ML models. Learn about techniques such as explanations for neural network predictions, data augmentation, and adaptive loss alignment. Discover how to address the increasing complexity of foundation models and optimize for multiple metrics to enhance the trustworthiness of AI systems in various applications.

How Can You Trust Machine Learning?

Stanford University