1. How to allow deep learning on your data without revealing your data
2. TWO DISTINCT SETTINGS
3. FEDERATED LEARNING FRAMEWORK
4. PAST APPROACH #1: DIFFERENTIAL PRIVACY
5. PAST APPROACH #2: CRYPTOGRAPHY
6. Outline for rest of the talk
7. INSTAHIDE: ENCRYPTION FOR DATA
8. INSTAHIDE: INSPIRED BY MIXUP
9. INSTAHIDE: HOW IT WORKS
10. INSTAHIDE: MINOR IMPACT ON ACCURACY
11. TEXTHIDE: BACKGROUND
12. TEXTHIDE: HOW IT WORKS
13. TEXTHIDE: MINOR IMPACT ON ACCURACY
14. Released software
15. RECALL: TWO SETTINGS
16. Carlini et al.'s Attack: Overview
17. Carlini et al.'s Attack: Cubic running time
18. Carlini et al.'s Attack: Limitations
19. CONCLUSIONS
Description:
Explore the challenges and solutions for enabling deep learning on private data without compromising privacy in this 23-minute conference talk by Sanjeev Arora from Princeton University. Delve into the concept of Federated Learning and the need for secure data sharing among multiple parties. Learn about InstaHide and TextHide, innovative methods for "encrypting" images and text to enhance data security. Examine the limitations of current Federated Learning frameworks and understand the potential vulnerabilities exposed by recent attacks. Discover how these encryption techniques, inspired by the MixUp data augmentation technique, aim to provide enhanced security in various applications. Analyze the Carlini et al. 2020 attack on InstaHide, which combines combinatorial algorithms and deep learning, and evaluate its implications for data privacy. Gain insights into the ongoing challenges and advancements in protecting sensitive data while enabling collaborative deep learning across multiple parties.
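The description notes that InstaHide and TextHide are inspired by the MixUp data augmentation technique. As a rough orientation only, the minimal Python/NumPy sketch below shows MixUp-style mixing and the kind of mix-then-mask encoding InstaHide layers on top of it; the function names (mixup_pair, instahide_like_encode), the pool size, the parameter k, and the Dirichlet weights are illustrative assumptions, not code from the talk or from the released software.

```python
# Illustrative sketch only: not the authors' released InstaHide code.
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Classic MixUp: return a convex combination of two examples and their labels."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing weight drawn from Beta(alpha, alpha)
    x_mixed = lam * x1 + (1.0 - lam) * x2   # pixel-wise blend of the two inputs
    y_mixed = lam * y1 + (1.0 - lam) * y2   # same blend applied to one-hot labels
    return x_mixed, y_mixed

def instahide_like_encode(x_private, public_pool, k=4, rng=None):
    """InstaHide-style step (sketch): blend one private image with k-1 images
    drawn from a public pool, then apply a random pixel-wise sign flip."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(public_pool), size=k - 1, replace=False)
    weights = rng.dirichlet(np.ones(k))     # k positive mixing weights summing to 1
    mixed = weights[0] * x_private
    for w, j in zip(weights[1:], idx):
        mixed = mixed + w * public_pool[j]
    sign_mask = rng.choice([-1.0, 1.0], size=mixed.shape)  # random sign mask
    return sign_mask * mixed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_private = rng.random((32, 32, 3))         # stand-in "private" image
    public_pool = rng.random((100, 32, 32, 3))  # stand-in public dataset
    encoded = instahide_like_encode(x_private, public_pool, k=4, rng=rng)
    print(encoded.shape)                        # (32, 32, 3)
```

The point of the sketch is only that the "encryption" is a randomized blend of a private example with other examples plus a random mask; the Carlini et al. attack discussed at the end of the talk targets exactly this kind of encoding.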

How to Allow Deep Learning on Your Data Without Revealing Your Data

Institute for Pure & Applied Mathematics (IPAM)