Paper overview part - U-Net architecture improvements
3. Classifier guidance explained
4. Intuition behind classifier guidance
5. Scaling classifier guidance
6. Diversity vs quality tradeoff and future work
7. Coding part - training a noise-aware classifier
8. Main training loop
9. Visualizing timestep conditioning
10. Sampling using classifier guidance
11. Core of the sampling logic
12. Shifting the mean - classifier guidance
13. Minor bug in their code and my GitHub issue
14. Outro
Description:
Dive into an in-depth video tutorial exploring the paper "Diffusion Models Beat GANs on Image Synthesis" and its accompanying code. Learn about U-Net architecture improvements, classifier guidance, and the intuition behind these concepts. Explore the training process for noise-aware classifiers, visualize timestep conditioning, and understand the core sampling logic. Discover how to implement classifier guidance, including the mean shift method, and gain insights into the trade-offs between diversity and quality in image synthesis. Examine a minor bug in the original code and follow along with practical coding examples throughout this comprehensive machine learning session.
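The mean-shift step mentioned above can be sketched in a few lines: classifier guidance perturbs the reverse-diffusion mean by the covariance times the gradient of the classifier's log-probability, scaled by a guidance factor. Below is a minimal NumPy sketch of that formula; the function name, the diagonal covariance, and the toy gradient values are illustrative, not taken from the paper's codebase.

```python
import numpy as np

def shift_mean(mean, variance, class_grad, guidance_scale=1.0):
    """Classifier guidance mean shift (sketch): mu' = mu + s * Sigma * grad,
    where grad approximates grad_x log p(y | x_t) and Sigma is diagonal."""
    return mean + guidance_scale * variance * class_grad

# Toy example with a diagonal covariance (all values hypothetical)
mu = np.zeros(4)                          # unguided denoising mean
sigma2 = np.full(4, 0.5)                  # diagonal of Sigma
grad = np.array([1.0, -1.0, 2.0, 0.0])    # hypothetical classifier gradient
print(shift_mean(mu, sigma2, grad, guidance_scale=2.0))
```

A larger `guidance_scale` pushes samples harder toward the target class, which is exactly the diversity-versus-quality trade-off the video discusses.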
Diffusion Models Beat GANs on Image Synthesis - ML Coding Series - Part 2