1. Introduction
2. Traditional Machine Learning
3. Deep Learning
4. Deep Learning Everywhere
5. Image Classification
6. ImageNet
7. Classification
8. Classification Notation
9. Classification Loss
10. Hypothesis Classes
11. Kernel Methods
12. Gaussian Kernel
13. Quadratic Loss
14. Summary
15. Statistical Learning Theory
16. Curse of Dimensionality
17. Gap for Learning
18. Proof
19. First Inequality
20. Defining Complexity
21. Empirical Complexity
22. Non-Empirical Complexity
23. The Gap
24. McDiarmid's Inequality
Description:
Delve into the first part of a comprehensive lecture on generalization theory in machine learning, presented by Adam Oberman from McGill University at the Institute for Pure & Applied Mathematics (IPAM). Explore the foundations of statistical learning theory, its similarities to classical approximation theory, and how it overcomes the curse of dimensionality using concentration of measure inequalities. Examine learning bounds for traditional machine learning methods like support vector machines (SVMs) and kernel methods, while discussing the challenges in applying these bounds to deep neural networks. Gain insights into image classification, hypothesis classes, kernel methods, and the intricacies of statistical learning theory. Investigate the curse of dimensionality, the gap for learning, and various complexities in machine learning. This 71-minute lecture serves as an essential resource for those seeking to understand the theoretical underpinnings of machine learning and its applications in high-dimensional spaces.
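For orientation, here is a minimal sketch of the quantities behind the "Gap for Learning" and "McDiarmid's Inequality" chapters, written in standard statistical-learning notation (the symbols below are generic conventions, not taken from the lecture slides). The population risk and empirical risk of a hypothesis $h$ are

\[
R(h) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell(h(x),y)\bigr],
\qquad
\widehat{R}_n(h) = \frac{1}{n}\sum_{i=1}^{n}\ell(h(x_i),y_i),
\]

and the generalization gap over a hypothesis class $\mathcal{H}$ is

\[
\mathrm{gap}_n(\mathcal{H}) = \sup_{h\in\mathcal{H}}\bigl|R(h)-\widehat{R}_n(h)\bigr|.
\]

For a loss bounded in $[0,1]$, McDiarmid's bounded-differences inequality gives, for any $t>0$,

\[
\Pr\bigl[\mathrm{gap}_n(\mathcal{H}) \ge \mathbb{E}\,\mathrm{gap}_n(\mathcal{H}) + t\bigr] \le \exp\!\left(-2nt^{2}\right),
\]

so the gap concentrates around its mean at a rate independent of the input dimension; this is the concentration-of-measure step referred to in the description above.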

Generalization Theory in Machine Learning

Institute for Pure & Applied Mathematics (IPAM)