From "labels" to "explanations of labels": one explanation generalizes to many examples
Learning from Human Explanation
Our Focus: Natural Language Explanations
Learning with Human Explanations
Explanations to "labeling rules"
Generalizing explanations: matching labeling rules to create pseudo-labeled data
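The "explanations to labeling rules" step can be sketched as follows. This is a minimal illustration, not the talk's actual system: the patterns, labels, and sentences below are hypothetical, and a human explanation (e.g. "labeled SPOUSE because the phrase 'married to' links the two entities") is assumed to compile into a simple pattern-plus-label rule that is then matched against unlabeled text to produce pseudo-labeled data.

```python
import re

def make_rule(pattern, label):
    """Compile a labeling rule: a trigger pattern paired with a label."""
    regex = re.compile(pattern, re.IGNORECASE)
    return lambda sentence: label if regex.search(sentence) else None

# Hypothetical rules distilled from explanations (illustrative only).
rules = [
    make_rule(r"\bmarried to\b", "SPOUSE"),
    make_rule(r"\bwas born in\b", "BIRTHPLACE"),
]

unlabeled = [
    "Ann Dunham was married to Barack Obama Sr.",
    "Tesla was born in Smiljan.",
    "The committee met on Tuesday.",
]

# Match each rule against unlabeled sentences; matches become pseudo labels.
pseudo_labeled = []
for sent in unlabeled:
    for rule in rules:
        label = rule(sent)
        if label is not None:
            pseudo_labeled.append((sent, label))
            break
```

Because one rule fires on every sentence containing its trigger, a single explanation can pseudo-label many examples — the label-efficiency argument of the talk.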
Challenge: Language Variations
Neural Rule Grounding for rule generalization
A Learnable, Soft Rule Matching Function
Neural Execution Tree (NEXT) for Soft Matching
Study on Label Efficiency (TACRED)
Results: Hate Speech (Binary) Classification
Take-aways: "One explanation generalizes to many examples" - better label efficiency vs. conventional supervision
Description:
Explore an explanation-based learning framework for natural language processing that improves label efficiency and model reliability in this 45-minute conference talk. Discover how human explanations can be leveraged to create more effective NLP systems with fewer training examples compared to traditional deep learning approaches. Learn about techniques like neural rule grounding and soft rule matching that allow models to generalize from explanations. Examine case studies demonstrating improved performance on tasks like relation extraction and hate speech classification using this framework. Gain insights into making NLP model development more accessible and less reliant on large labeled datasets or machine learning expertise.
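The soft rule matching idea mentioned above can be sketched in miniature. In the talk's framework the matcher is a learned neural module (the Neural Execution Tree); here a toy character-trigram Jaccard score stands in for it, and the phrases and threshold are assumptions chosen for illustration. The point is that a paraphrase like "wed to" can still fire a rule whose trigger is "married to", where exact string matching would fail.

```python
def trigrams(text):
    """Character trigrams of a lowercased string."""
    text = text.lower()
    return {text[i:i + 3] for i in range(len(text) - 2)}

def soft_score(phrase, span):
    """Jaccard similarity over character trigrams (toy stand-in for a learned matcher)."""
    a, b = trigrams(phrase), trigrams(span)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def soft_match(phrase, sentence, threshold=0.3):
    """Fire the rule if any same-length span scores above the threshold."""
    words = sentence.split()
    n = len(phrase.split())
    spans = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return any(soft_score(phrase, s) >= threshold for s in spans)

# A hard matcher misses the paraphrase; the soft matcher still fires.
hard = "married to" in "Ann was wed to Barack"           # False
soft = soft_match("married to", "Ann was wed to Barack")  # True
```

Replacing this toy similarity with a trainable scoring function is what makes the matching function learnable, letting the rule generalize across language variations.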