Syllabus:
Supervised vs. unsupervised learning
Generative modeling. Goal: take as input training samples from some distribution and learn a model that represents that distribution
Why generative models? Debiasing
Why generative models? Outlier detection
What is a latent variable?
Autoencoders: background
Dimensionality of latent space vs. reconstruction quality
Autoencoders for representation learning
Traditional autoencoders
VAEs: key difference from traditional autoencoders
VAE optimization
Intuition on regularization and the Normal prior
Reparametrizing the sampling layer
Why latent variable models? Debiasing
Generative Adversarial Networks (GANs)
Intuition behind GANs
Training GANs: loss function
GANs for image synthesis: latest results
Applications of paired translation
Paired translation: coloring from edges
Distribution transformations with GANs
Description:
Explore deep generative modeling in this comprehensive lecture from MIT's Introduction to Deep Learning course. Dive into the differences between supervised and unsupervised learning, understand the goals and applications of generative models, and learn about key concepts such as latent variables, autoencoders, and Variational Autoencoders (VAEs). Discover the intuition behind Generative Adversarial Networks (GANs), their training process, and their applications in image synthesis and paired translation. Gain insights into debiasing, outlier detection, and distribution transformations using these powerful deep learning techniques.
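
To make the VAE topics in the syllabus concrete (the optimization objective, the Normal prior, and the reparametrized sampling layer), here is a minimal PyTorch sketch. It is not taken from the course materials; the layer sizes, the Gaussian posterior, and the standard Normal prior are illustrative assumptions.

```python
# Minimal VAE sketch (illustrative, not the course's implementation):
# Gaussian posterior q(z|x), standard Normal prior, reparameterization trick.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I): sampling is pushed into eps,
        # so gradients can flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the N(0, I) prior
    # (the negative evidence lower bound).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage on a random batch standing in for flattened 28x28 images:
x = torch.rand(16, 784)
model = VAE()
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
loss.backward()
```

The reparameterize step is what makes the stochastic sampling layer differentiable, so the whole model can be trained end to end with ordinary backpropagation.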
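Likewise, here is a minimal sketch of the GAN training step behind the "Training GANs: loss function" topic, assuming small fully connected networks and the commonly used non-saturating generator loss; the architectures, learning rates, and data shapes are placeholders, not the course's implementation.

```python
# Minimal GAN training-step sketch (illustrative assumptions throughout).
# The discriminator is trained to maximize log D(x) + log(1 - D(G(z)));
# the generator uses the non-saturating variant, maximizing log D(G(z)).
import torch
import torch.nn as nn

latent_dim, data_dim = 100, 784

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real):
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: real samples labeled 1, generated samples labeled 0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()  # detach so this step does not update the generator
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage on a random batch standing in for real data scaled to [-1, 1]:
real = torch.rand(16, data_dim) * 2 - 1
print(train_step(real))
```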