Outline:
1. Intro
2. Probability Distributions in Data Sciences
3. Optimal Transport
4. Kantorovitch's Formulation
5. Optimal Transport Distances
6. Entropic Regularization
7. Sinkhorn Divergences
8. Sample Complexity
9. Density Fitting and Generative Models
10. Deep Discriminative vs Generative Models
11. Training Architecture
12. Automatic Differentiation
13. Examples of Image Generation
14. Generative Adversarial Networks
15. Open Problems
Description:
Explore optimal transport techniques for machine learning in this 42-minute conference talk by Gabriel Peyre from Ecole Normale Superieure. Delve into probability distributions in data sciences, focusing on Kantorovitch's formulation, optimal transport distances, and entropic regularization. Examine Sinkhorn divergences and sample complexity before transitioning to density fitting and generative models. Compare deep discriminative and generative models, discussing training architectures and automatic differentiation. Analyze examples of image generation and generative adversarial networks. Conclude by considering open problems in the field. This talk, part of the Isaac Newton Institute's workshop on "Approximation, sampling and compression in data science," bridges various mathematical aspects of data science, fostering collaboration among researchers in computational statistics, machine learning, optimization, information theory, and learning theory.
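As a concrete illustration of the entropic regularization and Sinkhorn iterations mentioned above, here is a minimal NumPy sketch of the standard Sinkhorn algorithm for entropy-regularized optimal transport between two discrete distributions. It is a generic textbook version, not code from the talk; the function name, toy data, and parameter values (`eps`, `n_iters`) are illustrative choices.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=500):
    """Entropy-regularized OT via Sinkhorn iterations.

    a, b : source/target histograms (each sums to 1)
    C    : pairwise cost matrix, shape (len(a), len(b))
    eps  : regularization strength (smaller -> closer to exact OT)
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)             # rescale to match column marginals
        u = a / (K @ v)               # rescale to match row marginals
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return P, np.sum(P * C)           # plan and its transport cost

# Toy example: three source points, two target points on the line
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5])
C = (x[:, None] - y[None, :]) ** 2    # squared Euclidean cost
a = np.full(3, 1 / 3)                 # uniform source weights
b = np.full(2, 1 / 2)                 # uniform target weights

P, cost = sinkhorn(a, b, C)
```

The plan `P` converges to a coupling whose row and column sums match `a` and `b`; decreasing `eps` recovers the unregularized Kantorovitch solution at the price of slower convergence.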

Optimal Transport for Machine Learning - Gabriel Peyre, Ecole Normale Superieure

Alan Turing Institute