In this lecture, Cynthia Rudin of Duke University argues that high-stakes decisions should rely on interpretable machine learning models rather than black box models, whose post-hoc explanations can be unreliable and carry real societal consequences. Interpretable models, by contrast, provide faithful explanations because the explanation is the model's own computation. The talk works through real-world examples in seizure prediction for ICU patients and digital mammography; surveys different types of machine learning problems and the optimization techniques behind interpretable models; and examines the supposed trade-off between accuracy and interpretability. It also covers case-based reasoning and prototype layers, the application of interpretable AI in medicine, and interactive tools and demos that showcase interpretable machine learning in critical healthcare settings.
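To make the idea of a "faithful explanation" concrete: one family of interpretable models discussed in this line of work is the point-based scoring system, where a prediction is just a sum of small integer points. The sketch below is purely illustrative; the feature names, point values, and threshold are hypothetical and are not the actual clinical model from the lecture.

```python
# Illustrative point-based scoring system. The features, points, and
# threshold below are HYPOTHETICAL, chosen only to show the idea that
# the explanation of a prediction IS the computation itself.

SCORE_CARD = {
    "risk_factor_a": 2,  # hypothetical risk factor worth 2 points
    "risk_factor_b": 1,  # hypothetical risk factor worth 1 point
    "risk_factor_c": 1,  # hypothetical risk factor worth 1 point
}
THRESHOLD = 2  # predict "high risk" when the total is >= 2 (hypothetical)

def score(record: dict) -> int:
    """Sum the points for every risk factor present in the record."""
    return sum(pts for name, pts in SCORE_CARD.items() if record.get(name))

def predict_high_risk(record: dict) -> bool:
    """A clinician can verify this by adding small integers by hand."""
    return score(record) >= THRESHOLD

patient = {"risk_factor_a": True, "risk_factor_c": False}
print(score(patient), predict_high_risk(patient))  # 2 True
```

Because the model is the sum itself, there is no separate "explanation" step that could disagree with what the model actually computed, which is the contrast with post-hoc explanations of black boxes.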
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead - Cynthia Rudin