1. Introduction
2. Discriminative vs generative
3. Observed vs latent variables
4. Quiz
5. Latent Variable Models
6. Types of latent random variables
7. Example
8. Loss Function
9. Variational inference
10. Reconstruction loss and KL regularizer
11. Regularized autoencoder
12. Regularized autoencoder
13. Learning the VAE
14. Reparameterization Trick
15. General
16. Language
17. VAE
18. Reparameterization
19. Motivation
20. Consistency
21. Semantic Similarity
22. Solutions
23. Free Bits
24. Weaken Decoder
25. Aggressive Inference Network
26. Handling Discrete Latent Variables
27. Discrete latent variables
28. Sampling discrete variables
29. Gumbel-Softmax
30. Application examples
31. Discrete random variables
32. Tree-structured latent variables
33. Discussion question
Description:
Explore a comprehensive lecture on models with latent random variables in neural networks for natural language processing. Delve into the distinctions between generative and discriminative models, as well as deterministic and random variables. Gain insights into Variational Autoencoders, their architecture, and loss functions. Learn techniques for handling discrete latent variables, including the Gumbel-Softmax trick. Examine real-world applications of these concepts in NLP tasks, such as language modeling and semantic similarity. Engage with discussion questions to deepen understanding of tree-structured latent variables and their implications for NLP models.
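Two of the techniques named above can be illustrated with a short NumPy sketch (an illustration under my own assumptions, not the lecture's code): the reparameterization trick, which rewrites a Gaussian sample so gradients can flow through its mean and variance, and the Gumbel-Softmax trick, which gives a differentiable relaxation of sampling a discrete latent variable.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps, eps ~ N(0, I).

    Moving the randomness into eps lets gradients flow through mu and
    log_var when training a VAE encoder.
    """
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps

def gumbel_softmax(logits, tau, rng):
    """Draw an approximately one-hot, differentiable sample over categories.

    Adds Gumbel(0, 1) noise to the logits and applies a softmax with
    temperature tau; as tau -> 0 the output approaches a one-hot sample
    from the categorical distribution softmax(logits).
    """
    gumbel = -np.log(-np.log(rng.uniform(size=np.shape(logits))))
    y = (np.asarray(logits) + gumbel) / tau
    y = y - y.max()  # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

# A 4-dimensional continuous latent sample, and a relaxed
# one-hot sample over 3 hypothetical categories.
z = reparameterize(np.zeros(4), np.zeros(4), rng)
y = gumbel_softmax(np.array([2.0, 1.0, 0.1]), tau=0.5, rng=rng)
```

Here `tau` controls how close `y` is to a hard one-hot vector; larger values give smoother, more uniform outputs, which trades gradient quality against bias.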

Neural Nets for NLP - Models with Latent Random Variables

Graham Neubig