Learning Visual Representations with Limited Labels for Semantic Image Understanding

Explore cutting-edge approaches to visual representation learning with limited labels in this 54-minute conference talk presented by Hamed Pirsiavash at the USC Information Sciences Institute. Delve into self-supervised feature learning methods that leverage unlabeled data for more scalable and flexible image understanding. Discover techniques for grouping similar images, iteratively distilling an ensemble of teacher models into a student model, and compressing learned representations. Gain insights into recent developments in the adversarial robustness of deep models, including backdoor attacks on both supervised and self-supervised learning. Learn about mean-shift clustering for self-supervised learning and the evolution of teacher models in iterative similarity distillation.
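The mean-shift idea mentioned above can be sketched roughly as follows: given an embedding of one augmented view from a teacher network, look up its nearest neighbors in a memory bank of past teacher embeddings, and pull the student's embedding of another view toward the (normalized) mean of that neighborhood. This is a minimal NumPy sketch of that idea, not the talk's actual method; the bank size, embedding dimension, and `k` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    """Project vectors onto the unit sphere, as is typical for embeddings."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical setup: a memory bank of past teacher embeddings, plus one
# (teacher, student) embedding pair for two augmented views of the same image.
bank = l2_normalize(rng.normal(size=(256, 32)))   # 256 bank entries, dim 32
teacher = l2_normalize(rng.normal(size=32))       # teacher's view embedding
student = l2_normalize(rng.normal(size=32))       # student's view embedding

def mean_shift_target(teacher_emb, bank, k=5):
    """Normalized mean of the teacher embedding and its k nearest
    bank entries (by cosine similarity) -- one mean-shift step."""
    sims = bank @ teacher_emb                 # cosine similarity to each entry
    neighbors = bank[np.argsort(-sims)[:k]]   # top-k most similar entries
    cluster = np.vstack([teacher_emb[None, :], neighbors])
    return l2_normalize(cluster.mean(axis=0))

target = mean_shift_target(teacher, bank)
# Training signal: pull the student embedding toward the mean-shifted target.
loss = float(np.sum((student - target) ** 2))
```

In a real training loop the loss gradient would update the student network, and the teacher would typically be a slowly updated (e.g. momentum) copy of the student whose embeddings refresh the memory bank.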