1. Intro
2. SS Learning: Invariant Representations
3. Pretext Tasks: A Deeper Dive
4. Contrastive Learning: Entity Discrimination
5. Contrastive Learning: Problem
6. SimCLR: Simple Contrastive Learning of Visual Representations
7. SimCLR: Architecture
8. SimCLR: Loss Function
9. SimCLR: Findings
10. MoCo V2: Momentum Contrast
11. MoCo V2: Architecture
12. MoCo V2: Main Principle
13. MoCo V2: Loss Function
14. MoCo V2: Findings
15. BYOL: Bootstrap Your Own Latent
16. BYOL: Architecture
17. BYOL: Main Principle
18. BYOL: Findings
19. SwAV: Swapping Assignments between Views
20. SwAV: Architecture
21. SwAV: Loss Function
22. SwAV: Main Principle
23. SwAV: Multi-crop
24. SwAV: Additional Findings
25. DINO: Self-Distillation with NO labels
26. DINO: Attention Maps
27. ViT (Vision Transformer): Architecture
28. DINO: Architecture
29. DINO: Loss Function
30. DINO: Main Principle
31. DINO: Multi-crop
32. DINO: Additional Findings & Compute
Description:
Explore self-supervised representation learning and contrastive techniques in computer vision through this comprehensive 58-minute lecture by Stanford University PhD student Nandita Bhaskhar. Dive deep into six recent frameworks: SimCLR, MoCo V2, BYOL, SwAV, DINO, and Barlow Twins. Examine their methodologies, performance, strengths, and weaknesses, with a focus on potential applications in the medical domain. Gain insights into how these techniques can leverage unlabeled datasets, overcoming the limitations of traditional supervised learning approaches. Learn about the speaker's research on observational supervision, self-supervision for medical data, and out-of-distribution detection for clinical deployment. Benefit from a thorough exploration of topics including invariant representations, pretext tasks, entity discrimination, and various architectural approaches in self-supervised learning.
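
As a quick illustration of the contrastive objective covered in the "SimCLR: Loss Function" segment, below is a minimal PyTorch-style sketch of the NT-Xent loss, in which each augmented view must identify its partner view among all other embeddings in the batch. This sketch is not from the lecture; the function name nt_xent_loss, the tensor shapes, and the temperature value are illustrative assumptions.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: [N, D] projections of two augmented views of the same N images."""
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D]
    sim = z @ z.t() / temperature                         # pairwise similarity logits
    sim.fill_diagonal_(float('-inf'))                     # exclude self-similarity
    n = z1.size(0)
    # The positive for sample i is its other view: index i+n (first half) or i-n (second half).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Example call with random embeddings standing in for projector outputs.
loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))

In SimCLR itself, z1 and z2 would be the projection-head outputs for two random augmentations of the same image batch, and the loss is minimized over the encoder and projector jointly.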

Self-Supervision & Contrastive Frameworks - A Vision-Based Review

Stanford University