1. Intro
2. Differentiable programming
3. Beyond images and strings
4. What if inputs or outputs are permutations?
5. Examples
6. Goals
7. Permutations as inputs
8. SUQUAN embedding (Le Morvan and Vert, 2017)
9. Supervised ON (SUQUAN)
10. Experiments: CIFAR-10
11. Limits of the SUQUAN embedding
12. The Kendall embedding (Jiao and Vert, 2015, 2017)
13. Geometry of the embedding
14. Kendall and Mallows kernels
15. Applications
16. Remark
17. Permutations as intermediate / output?
18. Optimal transport (OT)
19. Differentiable permutation matrix
20. Differentiable sort and rank
21. Soft quantization and soft quantiles
22. Application: soft top-k loss
23. Application: learning to sort
24. Conclusion
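As a rough illustration of the "Differentiable sort and rank" topic listed above (a generic sketch of the idea, not code from the talk): the hard rank of an entry, 1 + #{j : x[j] < x[i]}, can be relaxed by replacing each comparison indicator with a sigmoid, giving a smooth function of the input that gradients can flow through. The function name and temperature parameter `tau` here are illustrative choices, not part of the talk.

```python
import numpy as np

def soft_rank(x, tau=0.1):
    """Differentiable relaxation of the rank operator.

    Hard rank: rank(x[i]) = 1 + #{j : x[j] < x[i]}. Each indicator
    is replaced by a sigmoid with temperature tau, so the output is
    a smooth function of x. As tau -> 0, soft ranks approach the
    hard ranks.
    """
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]           # diff[i, j] = x[i] - x[j]
    sig = 1.0 / (1.0 + np.exp(-diff / tau))  # soft indicator of x[j] < x[i]
    np.fill_diagonal(sig, 0.0)               # exclude the j == i term
    return 1.0 + sig.sum(axis=1)

# With a small temperature, soft ranks are close to hard ranks.
print(soft_rank([0.3, -1.2, 2.5], tau=0.01))  # ≈ [2., 1., 3.]
```

With a larger `tau` the output interpolates smoothly between ranks, which is what makes it usable as a layer inside a differentiable architecture.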
Description:
Explore the intersection of statistics and computer science in machine learning through this 42-minute conference talk by Jean-Philippe Vert from Google Brain. Delve into innovative approaches for embedding permutations and relaxing ranking operators to integrate them into differentiable machine learning architectures. Discover how to analyze and predict preferences using continuous space representations of discrete combinatorial objects. Examine the SUQUAN and Kendall embeddings, optimal transport techniques, and applications such as soft top-k loss and learning to sort. Gain insights into the cross-fertilization between statistics and computer science in the era of Big Data, and understand the algorithmic paradigms underpinning modern machine learning.
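The description mentions optimal transport techniques for relaxing permutations. A common construction in this area (shown here as a generic sketch, not the specific method from the talk) turns an arbitrary score matrix into a doubly stochastic matrix, a continuous relaxation of a permutation matrix, via Sinkhorn iterations; the function name, temperature `tau`, and iteration count are illustrative assumptions.

```python
import numpy as np

def sinkhorn(scores, tau=0.5, n_iters=200):
    """Relax a square score matrix into a doubly stochastic matrix
    by alternating row and column normalisation (Sinkhorn iterations).

    The result lies in the Birkhoff polytope, the convex hull of
    permutation matrices; as tau -> 0 it concentrates on a hard
    permutation, while staying differentiable in the scores.
    """
    P = np.exp(np.asarray(scores, dtype=float) / tau)  # positive entries
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # normalise rows to sum to 1
        P /= P.sum(axis=0, keepdims=True)  # normalise columns to sum to 1
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))
print(P.sum(axis=0), P.sum(axis=1))  # both ≈ [1. 1. 1. 1.]
```

Because every step is elementwise and differentiable, such a relaxation can sit inside a network trained end-to-end, which is the setting the "Permutations as intermediate / output?" part of the talk addresses.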

Learning From Ranks, Learning to Rank - Jean-Philippe Vert, Google Brain

Alan Turing Institute