Why Model Interactions in Output? • Consistency is important!
A Tagger Considering Output Structure
Training Structured Models
Local Normalization and Global Normalization
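For reference, a locally normalized model breaks the output probability into per-step softmax distributions, so each decision is normalized on its own (the notation here is assumed, not copied from the slides):

```latex
P(Y \mid X) \;=\; \prod_{t=1}^{|Y|} P(y_t \mid X, y_1, \ldots, y_{t-1})
```

A globally normalized model instead scores the whole output and normalizes once over all candidate structures, which avoids per-step normalization at the price of an expensive partition function.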
The Structured Perceptron Algorithm • An extremely simple way of training (non-probabilistic) global models
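A minimal sketch of that idea, assuming a hypothetical feature function `feats(x, y)` and an argmax decoder `decode(x, w)` (neither taken from the lecture code): decode with the current weights, and on a mistake move the weights toward the gold structure's features and away from the prediction's.

```python
import numpy as np

def structured_perceptron(train_data, feats, decode, n_feats, epochs=5):
    """Structured perceptron: no probabilities, just an argmax decoder.

    train_data : list of (x, y_gold) pairs
    feats      : feature function mapping (x, y) -> np.ndarray of size n_feats
    decode     : decoder mapping (x, w) -> highest-scoring structure under w
    """
    w = np.zeros(n_feats)
    for _ in range(epochs):
        for x, y_gold in train_data:
            y_hat = decode(x, w)              # best structure under current weights
            if y_hat != y_gold:               # update only on mistakes
                w += feats(x, y_gold) - feats(x, y_hat)
    return w
```

The same mistake-driven update carries over to neural scorers, where the feature difference becomes a gradient step on the score difference between gold and predicted structures.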
Structured Perceptron Loss
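Written out (using S(Y, X) for the global score of output Y given input X, a notational assumption), the perceptron loss is the score gap between the model's one-best output and the gold output:

```latex
\hat{Y} = \operatorname*{argmax}_{Y'} S(Y', X),
\qquad
\ell_{\mathrm{percep}}(X, Y) = S(\hat{Y}, X) - S(Y, X) \;\ge\; 0
```

The loss is zero exactly when the decoder already returns the gold structure, which recovers the mistake-driven update above.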
Contrasting Perceptron and Global Normalization • Globally normalized probabilistic model
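The contrast is easiest to see in formulas (same assumed notation as above): a globally normalized probabilistic model, such as a CRF, turns the global score into a probability with one softmax over all output structures,

```latex
P(Y \mid X) = \frac{\exp S(Y, X)}{\sum_{Y'} \exp S(Y', X)},
\qquad
\ell_{\mathrm{NLL}}(X, Y) = -S(Y, X) + \log \sum_{Y'} \exp S(Y', X)
```

so training subtracts a log-sum-exp over every competitor (computable with the forward algorithm for sequence models), while the perceptron subtracts only the score of the single best competitor found by argmax decoding.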
Structured Training and Pre-training
Cost-Augmented Decoding for Hamming Loss • Hamming loss is decomposable over each word • Solution: add a cost to each incorrect choice during search (sketched below)
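Because Hamming loss counts per-word errors, cost augmentation only needs a per-position tweak. A minimal sketch, assuming per-word emission scores and no transition scores (with transitions, the same augmented scores would feed into Viterbi instead of a per-word argmax):

```python
import numpy as np

def cost_augmented_decode(scores, y_gold, cost=1.0):
    """Cost-augmented decoding for Hamming loss.

    scores : (n_words, n_tags) array of per-word tag scores
    y_gold : length-n_words array of gold tag indices
    cost   : bonus added to every incorrect tag, so that
             high-loss outputs win the training-time search
    """
    augmented = scores + cost                            # add cost to every tag...
    augmented[np.arange(len(y_gold)), y_gold] -= cost    # ...then take it back from gold tags
    return augmented.argmax(axis=1)                      # per-word argmax decoding
```

During training this finds an output that is both high-scoring and high-loss, which is the competitor the structured hinge loss needs; at test time decoding runs without the cost term.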
What's Wrong w/ Structured Hinge Loss?
Description:
Explore structured prediction basics in this lecture from CMU's Neural Networks for NLP course. Delve into the Structured Perceptron algorithm, structured max-margin objectives, and simple remedies to exposure bias. Learn about various types of prediction, the importance of modeling output interactions, and training methods for structured models. Examine local normalization, global normalization, and cost-augmented decoding for Hamming loss. Gain insights into sequence labeling, tagger considerations for output structure, and the challenges associated with structured hinge loss.
Neural Nets for NLP - Structured Prediction Basics