Explore self-supervised representation learning and contrastive techniques in computer vision through this comprehensive 58-minute lecture by Stanford University PhD student Nandita Bhaskhar. Dive deep into six recent frameworks: SimCLR, MoCo V2, BYOL, SwAV, DINO, and Barlow Twins. Examine their methodologies, performance, strengths, and weaknesses, with a focus on potential applications in the medical domain. Gain insights into how these techniques leverage unlabeled datasets, overcoming the limitations of traditional supervised learning approaches. Learn about the speaker's research on observational supervision, self-supervision for medical data, and out-of-distribution detection for clinical deployment. Benefit from a thorough exploration of topics including invariant representations, pretext tasks, instance discrimination, and various architectural approaches in self-supervised learning.
Self-Supervision & Contrastive Frameworks - A Vision-Based Review
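
To make the instance-discrimination idea behind these contrastive frameworks concrete, here is a minimal sketch of the NT-Xent loss used by SimCLR, where two augmented views of the same image form a positive pair and all other images in the batch serve as negatives. This is an illustrative sketch, not code from the lecture; the function name `nt_xent_loss`, the temperature value, and the toy batch at the end are assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) loss sketch.

    z1, z2: (N, D) projections of two augmented views of the same N images.
    """
    n = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D)
    sim = z @ z.t() / temperature                           # (2N, 2N) similarity logits
    sim.fill_diagonal_(float('-inf'))                       # exclude self-similarity
    # The positive for row i is its other view: index i+N for the first half,
    # index i-N for the second half. Everything else acts as a negative.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random projections standing in for encoder outputs on a batch of 8 images.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2))
```

In practice the strength of the negatives depends on batch composition, which is why SimCLR relies on large batches, while frameworks like BYOL and Barlow Twins, covered in the lecture, avoid explicit negatives altogether.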