1 – Welcome to class
2 – Training methods revisited
3 – Architectural methods
4 – 1. PCA
5 – Q&A on Definitions: Labels, unconditional, unsupervised, and self-supervised learning
6 – 2. Auto-encoder with bottleneck
7 – 3. K-means
8 – 4. Gaussian mixture model
9 – Regularized EBM
10 – Yann out of context
11 – Q&A on Norms and Posterior: when the student is thinking too far ahead
12 – 1. Unconditional regularized latent-variable EBM: sparse coding
13 – Sparse modeling on MNIST & natural patches
14 – 2. Amortized inference
15 – ISTA algorithm & RNN encoder
16 – 3. Convolutional sparse coding
17 – 4. Video prediction: very briefly
18 – 5. VAE: an intuitive interpretation
19 – Helpful whiteboard stuff
20 – Another interpretation
Description:
Explore advanced machine learning techniques in this comprehensive lecture by Yann LeCun. Dive into Principal Component Analysis (PCA), Auto-encoders, K-means clustering, Gaussian mixture models, sparse coding, and Variational Autoencoders (VAE). Learn about training methods, architectural approaches, and regularized Energy-Based Models (EBM). Gain insights into unconditional regularized latent variable EBMs, amortized inference, convolutional sparse coding, and video prediction. Benefit from in-depth Q&A sessions on labels, supervised learning, norms, and posterior distributions. Enhance your understanding with practical examples using MNIST and natural patches, and explore intuitive interpretations of VAEs.
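The chapter list only names the techniques covered. As a rough companion to the PCA chapter (not code from the lecture), here is a minimal NumPy sketch of PCA via the singular value decomposition of the centered data; the function name `pca` and the toy data are illustrative choices, not anything defined in the course.

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    # SVD of the centered data: rows of Vt are the principal directions,
    # ordered by decreasing singular value (i.e., decreasing variance)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # coordinates in the top-k PC basis

X = np.random.default_rng(0).normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

Because the singular values come out in decreasing order, the first projected coordinate always carries at least as much variance as the second.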
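For the K-means chapter, a minimal sketch of Lloyd's algorithm may help fix ideas; this is a generic textbook version, not the lecture's code, and the two-blob toy data is an assumption for illustration.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(iters):
        # assignment step: label each point with its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: each center becomes the mean of its cluster
        # (an empty cluster keeps its previous center)
        centers = np.stack([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

# two well-separated blobs as a toy check
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 0.5, (20, 2)), rng.normal(10, 0.5, (20, 2))])
centers, labels = kmeans(X, 2)
```

On separated blobs like these, the assignment stabilizes after a few iterations, with each blob mapped to a single cluster.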
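The ISTA chapter refers to the standard iterative shrinkage-thresholding algorithm for sparse coding. As a hedged sketch (a generic ISTA, not the lecture's implementation; the dictionary `Wd`, step size, and toy signal are illustrative assumptions):

```python
import numpy as np

def ista(x, Wd, alpha, iters=500):
    """Minimize 0.5 * ||x - Wd z||^2 + alpha * ||z||_1 over the code z."""
    L = np.linalg.norm(Wd, 2) ** 2                # Lipschitz constant of the gradient
    z = np.zeros(Wd.shape[1])
    for _ in range(iters):
        z = z - (Wd.T @ (Wd @ z - x)) / L         # gradient step on the quadratic term
        z = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft-thresholding
    return z

# toy check: recover a 3-sparse code from its noiseless observation
rng = np.random.default_rng(0)
Wd = rng.normal(size=(20, 50))                    # overcomplete dictionary
z_true = np.zeros(50)
z_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
x = Wd @ z_true
z_hat = ista(x, Wd, alpha=0.1)
```

The soft-thresholding step is what produces sparsity: coordinates whose gradient update stays below `alpha / L` are clamped to exactly zero.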

PCA, AE, K-Means, Gaussian Mixture Model, Sparse Coding, and Intuitive VAE

Alfredo Canziani