Separate Encoding for Coarse-grained Document Context
Self-attention/Transformers Across Sentences
Transformer-XL: Truncated BPTT + Transformer
Adaptive Span Transformers
Reformer: Efficient Adaptively Sparse Attention
How to Evaluate Document-level Models?
Document Problems: Entity Coreference
Mention (Noun Phrase) Detection
Components of a Coreference Model
Coreference Models: Instances
Mention Pair Models
Entity Models: Entity-Mention Models
Advantages of Neural Network Models for Coreference
End-to-End Neural Coreference (Span Model)
End-to-End Neural Coreference (Coreference Model)
Using Coreference in Neural Models
Discourse Parsing w/ Attention-based Hierarchical Neural Networks
Uses of Discourse Structure in Neural Models
Description:
Learn about document-level natural language processing tasks and models in this lecture from CMU's Neural Networks for NLP course. Explore document-level tasks such as entity coreference resolution and discourse parsing, along with techniques for handling long documents. Examine approaches for carrying context across sentences, including infinitely passing state, separate encoding of document context, and self-attention across sentences, and dive into specific models such as Transformer-XL, Adaptive Span Transformers, and Reformer. Understand the components and challenges of coreference resolution, from mention detection through mention-pair and entity-mention modeling approaches, and see how end-to-end neural models address the task. Gain insights into evaluating document-level models, discourse parsing with attention-based hierarchical neural networks, and leveraging discourse structure in neural approaches.
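
To make the segment-recurrence idea behind Transformer-XL concrete, here is a minimal sketch in PyTorch (the framework choice, the `SegmentRecurrentLayer` name, and the omission of Transformer-XL's relative positional encodings and attention masking are all simplifications for illustration, not the lecture's implementation): hidden states from the previous segment are cached and attended to as extra context, with gradients cut at the segment boundary as in truncated BPTT.

```python
from typing import Optional

import torch
import torch.nn as nn


class SegmentRecurrentLayer(nn.Module):
    """One attention layer with Transformer-XL-style segment recurrence
    (hypothetical minimal sketch; relative positions and masking omitted)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, memory: Optional[torch.Tensor]):
        # Queries come from the current segment only; keys/values also
        # cover the cached hidden states of the previous segment.
        kv = x if memory is None else torch.cat([memory, x], dim=1)
        attn_out, _ = self.attn(x, kv, kv, need_weights=False)
        x = self.norm(x + attn_out)
        # Detach the cache so gradients stop at the segment boundary,
        # mirroring truncated backpropagation through time.
        return x, x.detach()


# Process a long document segment by segment, passing the memory along.
layer = SegmentRecurrentLayer(d_model=64, n_heads=4)
document = torch.randn(1, 512, 64)          # (batch, tokens, d_model)
memory = None
for segment in document.split(128, dim=1):  # 128-token segments
    output, memory = layer(segment, memory)
```

Because the memory is detached, each segment trains at fixed cost while still reading context from arbitrarily far back at inference time, which is what lets this family of models scale beyond single-sentence inputs.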
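
Similarly, the mention-pair view of coreference can be sketched as a small pairwise scorer: given vector representations of two candidate mentions, a feed-forward network scores whether they corefer. This is an illustrative sketch rather than the lecture's end-to-end span model, and the feature choice (concatenating both mention vectors with their elementwise product) is one common convention, not a prescribed one.

```python
import torch
import torch.nn as nn


class MentionPairScorer(nn.Module):
    """Score a (candidate antecedent, anaphor) mention pair for coreference
    (illustrative sketch of the mention-pair modeling approach)."""

    def __init__(self, mention_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Pair features: both mention vectors plus their elementwise
        # product, a common choice for pairwise span scoring.
        self.ffnn = nn.Sequential(
            nn.Linear(3 * mention_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, antecedent: torch.Tensor, anaphor: torch.Tensor):
        pair = torch.cat([antecedent, anaphor, antecedent * anaphor], dim=-1)
        return self.ffnn(pair).squeeze(-1)  # higher = more likely coreferent


# Link each mention to its highest-scoring candidate antecedent.
scorer = MentionPairScorer(mention_dim=64)
anaphor = torch.randn(64)
candidates = torch.randn(5, 64)             # 5 earlier mentions
scores = scorer(candidates, anaphor.expand(5, 64))
best_antecedent = scores.argmax().item()
```

In practice such scorers are paired with a mention-detection step that proposes candidate spans, and with a dummy "no antecedent" option so that non-anaphoric mentions can start new entity clusters.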