Explore a thought-provoking lecture examining the theoretical challenges in understanding deep learning's successes and failures. Harvard professor Boaz Barak presents empirical evidence challenging conventional theories of how neural networks learn, covering three key findings: the similarity of internal representations across different training methods; non-monotonic learning patterns, in which performance temporarily degrades during training; and the complexity of learning dynamics, which cannot be reduced to layer-by-layer progression. Drawing on collaborative research with prominent scholars, delve into the intersection of approximation, optimization, and statistics to better understand deep learning's fundamental principles and limitations.