– Training a denoising autoencoder (DAE): PyTorch and Notebook
– Looking at DAE kernels
– Comparison with state-of-the-art inpainting techniques
– AE as an EBM
– Training a variational autoencoder (VAE): PyTorch and Notebook
– A VAE as a generative model
– Interpolation in input and latent space
– A VAE as an EBM
– VAE embedding distribution during training
– Generative adversarial networks (GANs) vs. DAE
– Generative adversarial networks (GANs) vs. VAE
– Training a GAN: the cost network
– Training a GAN: the generating network
– A possible cost network architecture
– The Italian vs. Swiss analogy for GANs
– Training a GAN: PyTorch code reading
– That was it :D
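
The first chapters cover training a denoising autoencoder. As a rough sketch of the core idea only (this is not the lecture's notebook; it uses NumPy instead of PyTorch so it runs standalone, and the toy data, architecture, and hyperparameters are all illustrative), a DAE corrupts its input with noise and is trained to reconstruct the *clean* input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in R^8 that live on a 2-D manifold,
# so an autoencoder with a 2-D bottleneck can reconstruct them.
z_true = rng.normal(size=(200, 2))
proj = rng.normal(size=(2, 8))
X = np.tanh(z_true @ proj)

d_in, d_hid = 8, 2
W1 = rng.normal(scale=0.1, size=(d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.1, size=(d_hid, d_in)); b2 = np.zeros(d_in)

def forward(Xn):
    H = np.tanh(Xn @ W1 + b1)   # encoder
    Xhat = H @ W2 + b2          # linear decoder
    return H, Xhat

loss0 = ((forward(X)[1] - X) ** 2).mean()  # loss before training

lr, noise_std = 0.5, 0.3
for step in range(3000):
    # Denoising objective: corrupt the input, reconstruct the CLEAN target.
    Xn = X + noise_std * rng.normal(size=X.shape)
    H, Xhat = forward(Xn)
    err = Xhat - X
    # Backprop by hand (MSE loss, tanh hidden layer).
    dXhat = 2 * err / err.size
    dW2 = H.T @ dXhat; db2 = dXhat.sum(0)
    dH = (dXhat @ W2.T) * (1 - H ** 2)     # tanh derivative
    dW1 = Xn.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_loss = ((forward(X)[1] - X) ** 2).mean()
print(f"MSE before: {loss0:.4f}, after: {final_loss:.4f}")
```

The only difference from a plain autoencoder is the corruption step: the network sees `Xn` but is penalized against `X`, which forces it to learn the data manifold rather than the identity map.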
Description:
Explore autoencoders (AE), denoising autoencoders (DAE), variational autoencoders (VAE), and generative adversarial networks (GAN) in this comprehensive video lecture. Learn to implement these models using PyTorch, analyze their kernels, and understand their applications in generative modeling and image processing. Dive into practical code examples, compare different architectures, and grasp key concepts such as latent space interpolation, embedding distributions, and cost network design. Gain insights into state-of-the-art inpainting techniques and discover how these models function as energy-based models (EBM). Benefit from clear explanations, including an intuitive Italian vs. Swiss analogy for understanding GANs, and conclude with a hands-on PyTorch code reading session for GAN implementation.
AE, DAE, and VAE with PyTorch - Generative Adversarial Networks and Code