Explore a comprehensive lecture on models with latent random variables in neural networks for natural language processing. Delve into the distinctions between generative and discriminative models, and between deterministic and random variables. Gain insight into Variational Autoencoders (VAEs), including their architecture and loss function. Learn techniques for handling discrete latent variables, including the Gumbel-Softmax trick. Examine real-world applications of these concepts in NLP tasks such as language modeling and semantic similarity. Engage with discussion questions to deepen understanding of tree-structured latent variables and their implications for NLP models.
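For a concrete preview of the two core techniques the lecture names, here is a minimal PyTorch sketch (an illustration, not code from the lecture itself): the VAE's negative evidence lower bound (ELBO) with the reparameterization trick for a Gaussian latent variable, and a Gumbel-Softmax sample as a differentiable relaxation of a discrete latent variable. The function names and the binary cross-entropy reconstruction term are assumptions made for this example.

```python
# Illustrative sketch, assuming PyTorch; not taken from the lecture itself.
import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    """Draw z = mu + sigma * eps so gradients flow through mu and sigma."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def vae_loss(x, recon_logits, mu, log_var):
    """Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I))."""
    # Reconstruction term; binary cross-entropy is an assumption made here
    # (it fits binary or [0, 1]-valued inputs).
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    # Closed-form KL divergence between a diagonal Gaussian and N(0, I).
    kl = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl

def gumbel_softmax_sample(logits, tau=1.0):
    """Differentiable relaxation of sampling from a categorical distribution."""
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)  # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)
```

PyTorch also ships `torch.nn.functional.gumbel_softmax`, which implements the same relaxation and offers a straight-through `hard=True` mode; the temperature `tau` controls how closely the softened samples approach one-hot vectors.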
Neural Nets for NLP - Models with Latent Random Variables