Chapters:
1. Intro
2. Great Empirical Success of Deep Models
3. Self-supervised Learning (SSL)
4. Contrastive Learning (CL)
5. Formulation of Contrastive Learning
6. Understanding of Contrastive Loss
7. What Deep Learning Brings?
8. Example: InfoNCE
9. Coordinate-wise Optimization
10. A Surprising Connection to Kernels
11. Overview of Nonlinear Analysis
12. Nonlinear Setting
13. Training Dynamics
14. 1-layer 1-node nonlinear network
15. How to reduce the local roughness p(w)?
16. 1-layer multiple node nonlinear network
17. Assumptions
18. Conditional Independence
19. What linear network cannot do
20. Global modulation
21. Feature Emergence
22. Experiment Setting
23. Model Architecture & Evaluation Metric
24. Visualization
25. Quadratic Loss versus InfoNCE
Description:
Explore a comprehensive presentation on contrastive learning delivered by Yuandong Tian of Meta for the DataLearning working group. Delve into the empirical success of deep models, self-supervised learning, and the formulation of contrastive learning. Examine the understanding of the contrastive loss, the InfoNCE example, and coordinate-wise optimization. Discover a surprising connection to kernels and gain insight into the nonlinear analysis. Investigate training dynamics, including 1-layer 1-node and 1-layer multiple-node nonlinear networks. Learn about conditional independence, global modulation, and feature emergence. Compare the quadratic loss with InfoNCE through the experimental setting, model architecture, and evaluation metrics. This 58-minute talk, recorded on March 7, 2023, offers valuable insights for researchers and students developing new technologies based on data assimilation and machine learning.
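
For reference, a standard form of the InfoNCE objective covered in the talk (the textbook formulation, not transcribed from the slides) is:

\mathcal{L}_{\mathrm{InfoNCE}} = -\,\mathbb{E}\left[ \log \frac{\exp\big(s(x, x^{+})/\tau\big)}{\exp\big(s(x, x^{+})/\tau\big) + \sum_{j=1}^{K} \exp\big(s(x, x_{j}^{-})/\tau\big)} \right]

where s(\cdot,\cdot) is a similarity score between learned representations (commonly cosine similarity), \tau is a temperature parameter, x^{+} is a positive (augmented) view of x, and x_{1}^{-}, \ldots, x_{K}^{-} are negative samples drawn from other data points.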

Towards Better Understanding of Contrastive Learning

DataLearning@ICL