Explore a comprehensive lecture on safety and robustness in deep learning with provable guarantees, delivered by Marta Kwiatkowska from Oxford at the Alan Turing Institute. Delve into the challenges of developing automated certification techniques for learnt software components in safety-critical applications like self-driving cars and medical diagnosis. Examine the role of Bayesian learning and causality in ensuring adversarial robustness and safety of decisions. Gain insights into emerging directions in trustworthy artificial intelligence, including machine learning accountability, fairness, privacy, and safety. Cover topics such as big data, resilience testing, adversarial perturbations, software verification, deep feedforward neural networks, robustness, certification guarantees, interventional robustness, probabilistic verification, and regression safety. Engage with the latest research and discussions on the intersection of mathematics and deep learning, addressing the need for rigorous software development methodologies in increasingly complex computing systems.
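To give a concrete flavour of the "certification guarantees" mentioned above, here is a minimal, purely illustrative sketch of interval bound propagation (IBP), one well-known sound-but-incomplete technique for proving robustness of a ReLU network inside an L-infinity ball. The tiny 2-2-1 network and all of its weights are made up for illustration; this is not the specific method from the lecture.

```python
# Illustrative sketch (not the lecture's specific method): interval bound
# propagation (IBP), a simple sound-but-incomplete way to certify that a
# tiny ReLU network's scalar output stays positive for EVERY input within
# an L-infinity ball of radius eps. All weights below are hypothetical.

def interval_affine(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds exactly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def certify(x, eps, W1, b1, W2, b2):
    """True => provably no adversarial perturbation within eps flips the
    output sign (conservative: False does not prove an attack exists)."""
    lo = [xi - eps for xi in x]
    hi = [xi + eps for xi in x]
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = relu_interval(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)
    return lo[0] > 0.0  # lower bound already positive => certified

# Hypothetical 2-2-1 network.
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, 0.1]
W2, b2 = [[1.0, 1.0]], [-0.5]

print(certify([1.0, 0.5], 0.05, W1, b1, W2, b2))  # small ball: certified
print(certify([1.0, 0.5], 2.0, W1, b1, W2, b2))   # large ball: not certified
```

Because the propagated bounds are sound but loose, a `False` answer only means the method could not certify the property; tighter relaxations (and the Bayesian and probabilistic approaches discussed in the lecture) trade precision against cost.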
Safety and Robustness for Deep Learning with Provable Guarantees - Marta Kwiatkowska - Oxford