1. Intro
2. What do we want to know about words?
3. A Manual Attempt: WordNet
4. An Answer (?): Word Embeddings!
5. How to Train Word Embeddings?
6. Distributional vs. Distributed Representations
7. Count-based Methods
8. Distributional Representations (see Goldberg 10.4.1): words appear in a context
9. Context Window Methods
10. Count-based and Prediction-based Methods
11. GloVe (Pennington et al. 2014)
12. What Contexts?
13. Types of Evaluation
14. Non-linear Projection: non-linear projections group things that are close in high-dimensional space, e.g. t-SNE (van der Maaten and Hinton 2008) groups things that give each other a high probability… (see the sketch after this list)
15. t-SNE Visualization can be Misleading! (Wattenberg et al. 2016)
16. Intrinsic Evaluation of Embeddings (categorization from Schnabel et al. 2015)
17. Extrinsic Evaluation: Using Word Embeddings in Systems
18. How Do I Choose Embeddings?
19. When are Pre-trained Embeddings Useful?
20. Limitations of Embeddings
21. Sub-word Embeddings (1)
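
The non-linear projection idea from chapter 14 can be sketched in a few lines. The snippet below is a minimal illustration, not material from the lecture: it projects a handful of stand-in word vectors (random here; trained embeddings in practice) to 2-D with scikit-learn's TSNE. The word list, vector dimension, and perplexity are illustrative assumptions, and, as chapter 15 warns, the resulting layout can be misleading.

import numpy as np
from sklearn.manifold import TSNE

# Stand-in for trained word embeddings: random 50-d vectors for a toy word list.
words = ["cat", "dog", "mat", "rug", "paris", "tokyo"]
vecs = np.random.default_rng(0).normal(size=(len(words), 50))

# t-SNE projection to 2-D; perplexity must be smaller than the sample count,
# so it is tiny here because the toy set is tiny.
proj = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(vecs)
for w, (x, y) in zip(words, proj):
    print(f"{w:>6}: ({x:7.2f}, {y:7.2f})")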
Description:
Explore the fundamentals of word representations in natural language processing through this comprehensive lecture from Carnegie Mellon University's Neural Networks for NLP course. Delve into various approaches for modeling words, from manual attempts like WordNet to modern word embedding techniques. Examine the differences between distributional and distributed representations, and learn about count-based and prediction-based methods for training word embeddings. Investigate the importance of context in word representations and discover different evaluation techniques for assessing embedding quality. Analyze the strengths and limitations of word embeddings, and gain insights into choosing appropriate embeddings for specific tasks. Conclude by exploring sub-word embedding techniques to address limitations of traditional word-level representations.
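
To make the count-based vs. prediction-based distinction concrete, here is a minimal sketch of the prediction-based side: skip-gram with negative sampling (in the spirit of word2vec), trained on a toy corpus, followed by a nearest-neighbor lookup as a toy intrinsic evaluation. The corpus, hyperparameters, and helper names are illustrative assumptions, not material from the lecture.

import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
ids = np.array([w2i[w] for w in corpus])

rng = np.random.default_rng(0)
V, dim, window, neg_k, lr, epochs = len(vocab), 16, 2, 3, 0.05, 200
W_in = rng.normal(scale=0.1, size=(V, dim))   # target-word embeddings
W_out = rng.normal(scale=0.1, size=(V, dim))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(epochs):
    for pos, center in enumerate(ids):
        lo, hi = max(0, pos - window), min(len(ids), pos + window + 1)
        for ctx in np.concatenate([ids[lo:pos], ids[pos + 1:hi]]):
            # Positive pair: pull the center word toward its observed context.
            g = sigmoid(W_in[center] @ W_out[ctx]) - 1.0
            grad = g * W_out[ctx]
            W_out[ctx] -= lr * g * W_in[center]
            # Negative samples: push the center word away from random words
            # (drawn uniformly here for brevity).
            for neg in rng.integers(0, V, size=neg_k):
                g = sigmoid(W_in[center] @ W_out[neg])
                grad += g * W_out[neg]
                W_out[neg] -= lr * g * W_in[center]
            W_in[center] -= lr * grad

def nearest(word, k=3):
    # Toy intrinsic evaluation: cosine-similarity nearest neighbors.
    v = W_in[w2i[word]]
    sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-8)
    return [vocab[i] for i in np.argsort(-sims)[1:k + 1]]

print("neighbors of 'cat':", nearest("cat"))

Negative samples are drawn uniformly here for brevity; word2vec in practice draws them from a smoothed unigram distribution. A count-based method would instead build a word-context co-occurrence matrix over the same corpus and reduce its dimensionality (e.g., with SVD); GloVe (chapter 11) sits between the two, fitting embeddings to log co-occurrence counts.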

CMU Neural Nets for NLP 2018 - Models of Words

Graham Neubig