Neural Representation Learning in Natural Language Processing
Neural Representation Learning for NLP
What is a word representation?
Why should we learn word representations?
How can we get word representations?
Symbolic or Distributed?
Supervised or Unsupervised?
Count-based or Prediction-based?
Case Study: NNLM
Case Study: GloVe
Case Study: ELMo
Case Study: BERT
Software, Model, Corpus
Using non-contextualized representations when ...
Using contextualized representations when ...
What is a sentence representation?
Why do we need sentence representations?
How can we learn sentence representations?
Different Structural Biases
Clusters of Approaches
Case Study: Must-know Points about RNN
CNN: 1D and 2D Convolution
CNN: Narrow/Equal/Wide Convolution
CNN: Multiple Filter Convolution
Case Study: Must-know Points about Transformer
Description:
Explore neural representation learning in natural language processing through this comprehensive lecture from the CMU Low Resource NLP Bootcamp 2020. Delve into various methods for learning neural representations of language, covering topics such as word and sentence representations, supervised and unsupervised learning approaches, and case studies on NNLM, GloVe, ELMo, and BERT. Gain insights into different structural biases, clusters of approaches, and must-know points about RNN, CNN, and Transformer models. Learn when to use non-contextualized versus contextualized representations and understand the importance of software, models, and corpora in neural representation learning for NLP.
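To make the contrast between non-contextualized and contextualized representations concrete, here is a minimal sketch that is not taken from the lecture itself: the GloVe and BERT model names and the example sentences are illustrative choices, and it assumes gensim, torch, and the Hugging Face transformers library are installed.

# Minimal sketch (illustrative, not from the lecture): non-contextualized vs. contextualized word vectors.
import gensim.downloader as api
import torch
from transformers import AutoModel, AutoTokenizer

# Non-contextualized (GloVe): one fixed vector per word type, the same in every sentence.
glove = api.load("glove-wiki-gigaword-100")   # illustrative pretrained-model choice
print(glove["bank"].shape)                    # (100,) -- identical vector regardless of context

# Contextualized (BERT): one vector per token occurrence, so "bank" differs across sentences.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

for sentence in ["She sat on the river bank", "He robbed the bank"]:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # shape: [1, seq_len, 768]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    bank_vec = hidden[0, tokens.index("bank")]            # contextual vector for this occurrence of "bank"
    print(sentence, bank_vec.shape)

Running this prints a single shared GloVe vector shape for "bank" and, for each sentence, a BERT vector whose values depend on the surrounding words, which is the practical difference the lecture's "Using non-contextualized/contextualized when ..." slides address.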