Lecture outline:
1. Intro
2. Why interpretability?
3. What is interpretability?
4. Two broad themes
5. Source Syntax in NMT
6. Why neural translations are the right length
7. Fine-grained analysis of sentence embeddings
8. What you can cram into a single vector: Probing sentence embeddings for linguistic properties
9. Issues with probing
10. Minimum Description Length (MDL) Probes
11. How to evaluate?
12. Explanation Techniques: gradient-based importance scores
13. Explanation Technique: Extractive Rationale Generation
Description:
Explore model interpretation in neural networks for natural language processing through this comprehensive lecture from CMU's CS 11-747 course. Delve into the importance and definition of interpretability, examining two broad themes in the field. Investigate source syntax in neural machine translation and discover why neural translations achieve appropriate lengths. Analyze sentence embeddings in-depth, including probing techniques and their limitations. Learn about Minimum Description Length (MDL) Probes and evaluation methods. Examine explanation techniques such as gradient-based importance scores and extractive rationale generation. Gain valuable insights into the inner workings of neural models for NLP applications.
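Among the explanation techniques the description mentions, gradient-based importance scores are the most directly illustrated in code. The sketch below is a minimal illustration of the general idea, not material from the lecture: it backpropagates a toy classifier's predicted-class score to the input embeddings and uses gradient × input as a per-token importance score. The model, vocabulary, and example sentence are placeholder assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and input sentence (placeholders, not from the lecture).
vocab = {"<pad>": 0, "the": 1, "movie": 2, "was": 3, "great": 4, "boring": 5}
tokens = ["the", "movie", "was", "great"]
ids = torch.tensor([[vocab[t] for t in tokens]])  # shape (1, seq_len)


class TinyClassifier(nn.Module):
    """Average-of-embeddings sentiment classifier; a stand-in for a real NLP model."""

    def __init__(self, vocab_size=6, dim=16, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, n_classes)

    def forward(self, embedded):
        # embedded: (batch, seq_len, dim) -> class logits (batch, n_classes)
        return self.out(embedded.mean(dim=1))


model = TinyClassifier()

# Embed the tokens and track gradients with respect to the embeddings themselves.
embedded = model.emb(ids).detach().requires_grad_(True)
logits = model(embedded)
pred_class = logits.argmax(dim=-1).item()

# Backpropagate the predicted class's score down to the input embeddings.
logits[0, pred_class].backward()

# "Gradient x input", summed over the embedding dimension, gives one
# importance score per input token.
scores = (embedded.grad * embedded).sum(dim=-1).squeeze(0).detach()
for tok, score in zip(tokens, scores.tolist()):
    print(f"{tok:>8s}  {score:+.4f}")
```

The same recipe applies to larger models: replace the toy classifier with any differentiable network, keep a handle on the input embeddings, and attribute the prediction to tokens via their gradients.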

CMU Neural Nets for NLP: Model Interpretation

Graham Neubig