1. Intro
2. Generative Models
3. Adversarial Training
4. Basic Paradigm
5. Problems with Generation • Over-emphasis of common outputs, fuzziness
6. Adversarial Training Method
7. In Equations
8. Problems w/ Training
9. Applications of GAN Objectives to Language
10. Problem! Can't Backprop through Sampling
11. Solution: Use Learning Methods for Latent Variables
12. Discriminators for Sequences
13. Stabilization Trick
14. Interesting Application: GAN for Data Cleaning (Yang et al. 2017)
15. Adversaries over Features vs. Over Outputs
16. Learning Domain-invariant Representations (Ganin et al. 2016) • Learn features that cannot be distinguished by domain
17. Adversarial Multi-task Learning (Liu et al. 2017)
18. Implicit Discourse Connection Classification w/ Adversarial Objective
19. Professor Forcing (Lamb et al. 2016)
20. Unsupervised Style Transfer for Text (Shen et al. 2017)
Description:
Explore adversarial learning in neural networks for natural language processing in this lecture from CMU's CS 11-747 course. Dive into generative adversarial networks (GANs), examining their application to both features and outputs in NLP tasks. Learn about the challenges of applying GANs to discrete outputs, and discover techniques for overcoming these obstacles. Investigate adversarial training methods for domain adaptation, multi-task learning, and unsupervised text style transfer. Gain insights into stabilization tricks and applications such as GAN-based data cleaning. Enhance your understanding of advanced NLP concepts through practical examples and theoretical explanations provided by Professor Graham Neubig.

Neural Nets for NLP 2017 - Adversarial Learning

Graham Neubig