Important preliminary: Shannon's mutual information
Theorem: Memorizing entire examples
Tasks: Mixtures of subpopulations
Tasks: Per-subpopulation distributions
Proof: Lower bounds via singletons
Experiments: Logistic regression and neural networks
Setup: Learning from a stream of examples
Theorem: How much space?
Theorem: Example memorization
Tasks: Space lower bounds for natural models
Proof: Structure and overview
Proof: Requirements for distinguishing one bit
Main theorems and implications
Directions for future work
Memorize when you can't identify relevant information
Description:
Explore the concept of example memorization in learning through this Google TechTalk, presented by Gavin Brown as part of the Differential Privacy for ML seminar series. The talk asks what it means to "memorize training examples," presents empirical evidence of example memorization, and examines the relationships between space, information, and deep learning, building on Shannon's mutual information. It covers theorems on memorizing entire examples for tasks defined by mixtures of subpopulations and per-subpopulation distributions, proof techniques for lower bounds via singletons, and experiments with logistic regression and neural networks. The second part turns to learning from a stream of examples: theorems on space requirements and example memorization, space lower bounds for natural models, and the structure of the proofs, including the requirements for distinguishing one bit. The talk closes with the main theorems, their implications, and directions for future work.
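The talk's central quantity is Shannon's mutual information I(X; Y), which measures how many bits observing Y reveals about X (for memorization bounds, X is a training example and Y the learned model). As a minimal illustration, not taken from the talk, here is a sketch that computes I(X; Y) in bits directly from a finite joint distribution:

```python
import math

def mutual_information(joint):
    """Shannon mutual information I(X; Y) in bits.

    `joint[x][y]` holds P(X = x, Y = y); entries must sum to 1.
    Uses I(X; Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ).
    """
    # Marginal distributions P(X = x) and P(Y = y).
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for x, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0:  # 0 * log 0 is taken as 0 by convention
                mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two perfectly correlated fair bits share 1 bit of information;
# two independent fair bits share none.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # 1.0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

The memorization lower bounds discussed in the talk are statements about this quantity being large: any sufficiently accurate learner must produce an output with high mutual information with its training data.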
Example Memorization in Learning: Batch and Streaming - Differential Privacy for ML