Part 1: Density estimation of a restricted family of distributions
Our main technical tool: sample compression schemes
A general learning problem
Examples of EMX problems
Binary classification (the "clean" case)
The "Fundamental Theorem of Statistical Learning"
The case of subset probability maximization
Non-equivalence for EMX
More sample compression
Monotone compression for subset probability maximization
Examples of such compression
A quantitative version
A model-theoretic observation
Discussion
New challenges
Description:
Explore a seminar on theoretical machine learning focused on learning probability distributions and the limits of what can be learned. The talk examines the general statistical learning problem, including its most ambitious formulation and a proof that it is impossible in full generality. It covers density estimation for restricted families of distributions, with sample compression schemes as the key technical tool, and surveys examples of EMX problems, including binary classification and subset probability maximization. Along the way, it discusses the "Fundamental Theorem of Statistical Learning" and its implications, non-equivalence results for EMX, monotone compression and its quantitative versions, a model-theoretic observation, and new challenges in learning probability distributions.
Learning Probability Distributions - What Can, What Can't Be Done - Shai Ben-David