Chapters:
1 – Welcome to class
2 – Predictive models
3 – Multi-output system
4 – Notation (factor graph)
5 – The energy function F(x, y)
6 – Inference
7 – Implicit function
8 – Conditional EBM
9 – Unconditional EBM
10 – EBM vs. probabilistic models
11 – Do we need a y at inference?
12 – When inference is hard
13 – Joint embeddings
14 – Latent variables
15 – Inference with latent variables
16 – Energies E and F
17 – Preview of the EBM practicum
18 – From energy to probabilities
19 – Examples: K-means and sparse coding
20 – Limiting the information capacity of the latent variable
21 – Training EBMs
22 – Maximum likelihood
23 – How to pick β?
24 – Problems with maximum likelihood
25 – Other types of loss functions
26 – Generalised margin loss
27 – General group loss
28 – Contrastive joint embeddings
29 – Denoising or masked autoencoder
30 – Summary and final remarks
Description:
Explore joint embedding methods and latent variable energy-based models (LV-EBMs) in this comprehensive lecture by Yann LeCun. Delve into predictive models, multi-output systems, and factor graph notation before examining energy functions and inference processes. Investigate conditional and unconditional EBMs, comparing them to probabilistic models. Learn about joint embeddings, latent variables, and their role in inference. Discover how to convert energy to probabilities and explore examples like K-means and sparse coding. Understand the training process for EBMs, including maximum likelihood and alternative loss functions. Examine contrastive joint embeddings and denoising autoencoders. Gain valuable insights into advanced machine learning concepts and techniques throughout this in-depth presentation.
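For readers scanning the chapter list, the recurring symbols can be pinned down with the standard energy-based-model formulation (summarised here in LeCun's usual notation, not quoted from the lecture itself). Given an energy E(x, y, z) over an input x, an output y, and a latent variable z, the free energy F eliminates z, inference selects the lowest-energy output, and a Gibbs distribution converts energies into probabilities:

F(x, y) = min_z E(x, y, z)
ŷ = argmin_y F(x, y)
P(y | x) = exp(−β F(x, y)) / ∫ exp(−β F(x, y′)) dy′

Here β > 0 is an inverse temperature; the chapters "From energy to probabilities" and "How to pick β?" cover the last expression and the choice of β.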

Joint Embedding Method and Latent Variable Energy Based Models

Alfredo Canziani