Explore a comprehensive privacy analysis of deep learning in this 17-minute IEEE conference talk. Delve into the susceptibility of deep neural networks to inference attacks and examine white-box inference techniques against both centralized and federated learning models. Discover novel membership inference attacks that exploit the behavior of the stochastic gradient descent algorithm. Investigate why deep learning models leak information about their training data, and learn how even well-generalized models can remain vulnerable to white-box attacks. Analyze privacy risks in federated learning settings, including active membership inference attacks mounted by adversarial participants. Gain insights into the experimental setup, attacks on pretrained models, and the broader implications for privacy in deep learning systems.
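The core white-box idea the talk covers, using a model's internal gradients as a membership signal, can be sketched in a few lines. The following is a minimal, hypothetical illustration rather than the speakers' actual attack: it deliberately overfits a small logistic-regression model and then scores each sample by the norm of its per-sample loss gradient. Training-set members, which the model has fit, tend to produce near-zero gradients, while held-out samples do not; all data, model, and threshold choices here are synthetic assumptions for demonstration only.

```python
# Illustrative white-box membership-inference sketch (NOT the talk's exact
# attack): use per-sample gradient norms of an overfit model as a score.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# High-dimensional synthetic data with random labels: with more features
# than training samples, the model can memorize the members.
d, n = 50, 20
X_mem = rng.normal(size=(n, d))          # "member" (training) samples
y_mem = rng.integers(0, 2, size=n).astype(float)
X_non = rng.normal(size=(n, d))          # "non-member" (held-out) samples
y_non = rng.integers(0, 2, size=n).astype(float)

# Train with plain batch gradient descent on the member set only.
w = np.zeros(d)
for _ in range(3000):
    grad = X_mem.T @ (sigmoid(X_mem @ w) - y_mem) / n
    w -= 0.5 * grad

def grad_norm(x, label):
    """Attack feature: L2 norm of the per-sample logistic-loss gradient."""
    return abs(sigmoid(x @ w) - label) * np.linalg.norm(x)

mem_scores = [grad_norm(x, t) for x, t in zip(X_mem, y_mem)]
non_scores = [grad_norm(x, t) for x, t in zip(X_non, y_non)]

# Members should have markedly smaller gradient norms than non-members,
# so a simple threshold on this score separates the two groups.
print(np.mean(mem_scores), np.mean(non_scores))
```

In a real deep-learning setting the analogous score would come from the gradients of the network's layers, which is exactly the kind of internal signal a white-box adversary (or a federated-learning participant) can observe.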