Safety and Robustness for Deep Learning with Provable Guarantees - Marta Kwiatkowska - Oxford

Alan Turing Institute

Contents:
1. Intro
2. Big data
3. Examples
4. Safety
5. Resilience testing
6. Fatal crashes
7. An adversarial perturbation
8. Software verification
9. Machine learning
10. Deep feedforward neural networks
11. Neural networks and classifiers
12. Training and testing
13. Robustness
14. Safety of classification decisions
15. First approach
16. Lipschitz
17. Search for adversarial examples
18. Search for better adversarial examples
19. MSR for videos
20. Text classification
21. Certification guarantees
22. Summary
23. Questioning
24. Interventional robustness
25. Probabilistic verification
26. Pointwise robustness
27. Regression safety
28. High-profile failures
29. We are scratching at the surface
30. Conclusion
31. Questions

Description:
Explore a comprehensive lecture on safety and robustness in deep learning with provable guarantees, delivered by Marta Kwiatkowska of Oxford at the Alan Turing Institute. Delve into the challenges of developing automated certification techniques for learnt software components in safety-critical applications like self-driving cars and medical diagnosis. Examine the role of Bayesian learning and causality in ensuring adversarial robustness and safety of decisions. Gain insights into emerging directions in trustworthy artificial intelligence, including machine learning accountability, fairness, privacy, and safety. Cover topics such as big data, resilience testing, adversarial perturbations, software verification, deep feedforward neural networks, robustness, certification guarantees, interventional robustness, probabilistic verification, and regression safety. Engage with the latest research and discussions on the intersection of mathematics and deep learning, addressing the need for rigorous software development methodologies in increasingly complex computing systems.
