Why Model Interactions in Output? • Consistency is important! (e.g., "time flies like an arrow")
Sequence Labeling w
Recurrent Decoder
Teacher Forcing and Exposure Bias
An Example of Exposure Bias
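A minimal toy sketch of the idea (not from the lecture; the tags and the NEXT table are invented stand-ins for a trained decoder). Under teacher forcing, training only ever conditions on gold histories; at test time one early mistake puts the decoder off-distribution and derails the rest of the sequence:

# Toy illustration of exposure bias. The dict plays the role of a model
# that predicts the next tag from the previous *predicted* tag; it was
# "trained" only on gold prefixes such as "<s> DET NOUN ...".
NEXT = {
    "<s>": "DET",
    "DET": "NOUN",
    "NOUN": "VERB",
    "VERB": "</s>",
}

def decode(first_step_prediction: str) -> list[str]:
    """Free-running decoding: each step conditions on the previous prediction."""
    tags, prev = [], first_step_prediction
    while prev != "</s>" and len(tags) < 10:
        tags.append(prev)
        prev = NEXT.get(prev, "</s>")  # unseen history: the model is off-distribution
    return tags

print(decode(NEXT["<s>"]))  # gold-like start: ['DET', 'NOUN', 'VERB']
print(decode("ADJ"))        # one bad first step, never seen in training: decoding derails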
Models w/ Local Dependencies
Local Normalization vs. Global Normalization
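As a sketch of the contrast (notation assumed, not taken verbatim from the slides): a locally normalized model applies a softmax at every step, while a globally normalized model normalizes once over all output sequences:

P_{\text{local}}(Y \mid X) = \prod_{t=1}^{T} \frac{\exp s(y_t \mid X, y_{<t})}{\sum_{y'} \exp s(y' \mid X, y_{<t})}
\qquad
P_{\text{global}}(Y \mid X) = \frac{\exp \sum_{t=1}^{T} s(y_t \mid X, y_{<t})}{\sum_{Y'} \exp \sum_{t=1}^{T} s(y'_t \mid X, y'_{<t})}

The denominator of the global model sums over all possible output sequences; this is the partition function Z(X) revisited later in the outline.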
Conditional Random Fields
Potential Functions
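A hedged sketch of the standard linear-chain form (symbols assumed): the CRF factorizes the global score into local potentials over adjacent tags,

P(Y \mid X) = \frac{1}{Z(X)} \prod_{t=1}^{T} \psi_t(y_{t-1}, y_t, X),
\qquad
Z(X) = \sum_{Y'} \prod_{t=1}^{T} \psi_t(y'_{t-1}, y'_t, X),

where each log-potential is typically the sum of an emission score for y_t and a transition score for the tag pair (y_{t-1}, y_t).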
BILSTM-CRF for Sequence Labeling
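A minimal sketch of the BiLSTM-CRF scoring function (names are illustrative, not the lecture's code). In the real model the per-token emission scores come from a BiLSTM over the sentence; here a random matrix stands in for them:

import numpy as np

rng = np.random.default_rng(0)
num_tags, sent_len = 4, 5
emissions = rng.normal(size=(sent_len, num_tags))    # e_t(y_t): per-token tag scores (from a BiLSTM in practice)
transitions = rng.normal(size=(num_tags, num_tags))  # T[y_{t-1}, y_t]: learned tag-pair scores

def sequence_score(tags: list[int]) -> float:
    """Score of one tag sequence: sum of emission and transition log-potentials."""
    score = emissions[0, tags[0]]
    for t in range(1, len(tags)):
        score += transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    return float(score)

print(sequence_score([0, 1, 2, 1, 3]))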
CRF Training & Decoding
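For decoding, the linear-chain structure makes the exact argmax tractable with the Viterbi algorithm. A self-contained sketch with made-up scores (variable names are assumptions, not the lecture's):

import numpy as np

rng = np.random.default_rng(0)
num_tags, sent_len = 4, 5
emissions = rng.normal(size=(sent_len, num_tags))
transitions = rng.normal(size=(num_tags, num_tags))

def viterbi() -> list[int]:
    """Exact argmax over tag sequences via dynamic programming."""
    best = emissions[0].copy()                      # best score of a prefix ending in each tag
    back = np.zeros((sent_len, num_tags), dtype=int)
    for t in range(1, sent_len):
        # scores[i, j] = best prefix ending in tag i, then transition i -> j, plus emission of j
        scores = best[:, None] + transitions + emissions[t][None, :]
        back[t] = scores.argmax(axis=0)
        best = scores.max(axis=0)
    tags = [int(best.argmax())]
    for t in range(sent_len - 1, 0, -1):            # follow backpointers to recover the path
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]

print(viterbi())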
Forward Calculation: Middle Parts
Forward Calculation: Final Part • Finish the sentence with the sentence-final symbol
Revisiting the Partition Function
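The forward algorithm computes Z(X) by replacing Viterbi's max with a sum, done in log space for stability. A sketch with made-up scores; the slides finish sequences with an explicit sentence-final symbol, whereas here that bookkeeping is folded into the final log-sum-exp for brevity. The brute-force check enumerates all tag sequences and is only feasible for tiny examples:

import itertools
import numpy as np

rng = np.random.default_rng(0)
num_tags, sent_len = 3, 4
emissions = rng.normal(size=(sent_len, num_tags))
transitions = rng.normal(size=(num_tags, num_tags))

def log_z() -> float:
    """log Z(X) via the forward recursion."""
    alpha = emissions[0].copy()  # alpha[j]: log-sum of all prefixes ending in tag j
    for t in range(1, sent_len):
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))  # log-sum-exp over previous tag i
    return float(alpha.max() + np.log(np.exp(alpha - alpha.max()).sum()))

def log_z_brute() -> float:
    """Enumerate every tag sequence and log-sum-exp their scores."""
    scores = []
    for tags in itertools.product(range(num_tags), repeat=sent_len):
        s = emissions[0, tags[0]]
        for t in range(1, sent_len):
            s += transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
        scores.append(s)
    a = np.array(scores)
    return float(a.max() + np.log(np.exp(a - a.max()).sum()))

print(log_z(), log_z_brute())  # the two agree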
Training Details
Generalized Dynamic Programming Models • Decomposition Structure: What structure to use, and thus also what dynamic programming to perform? • Featurization: How do we calculate local scores?
Description:
Explore structured prediction with local independence assumptions in this lecture from CMU's Neural Networks for NLP course. Delve into the fundamentals of structured prediction, understand the importance of local independence assumptions, and learn about Conditional Random Fields (CRFs). Discover how to model interactions in output for consistency, examine sequence labeling with recurrent decoders, and understand the concepts of teacher forcing and exposure bias. Investigate models with local dependencies, comparing local and global normalization. Gain insights into BILSTM-CRF for sequence labeling, CRF training and decoding, and the intricacies of forward calculation. Examine the partition function, training details, and generalized dynamic programming models, including decomposition structures and featurization techniques.
Neural Nets for NLP 2021 - Structured Prediction with Local Independence Assumptions