Outline:
1. Intro
2. Overview
3. Deep Learning Background
4. Distributed / Federated Learning
5. Threat Model
6. Leakage from model updates
7. Property Inference Attacks
8. Infer Property Two-Party Experiment
9. Active Attack Works Even Better
10. Multi-Party Experiments
11. Visualize Leakage in Feature Space
12. Takeaways
Description:
Explore a conference talk that delves into the security vulnerabilities of collaborative machine learning techniques, focusing on unintended feature leakage. Learn about passive and active inference attacks that can exploit model updates to infer sensitive information about participants' training data. Discover how adversaries can perform membership inference and property inference attacks, potentially compromising privacy in distributed learning environments. Examine various tasks, datasets, and learning configurations to understand the scope and limitations of these attacks. Gain insights into possible defense mechanisms against such vulnerabilities in collaborative learning systems.

Exploiting Unintended Feature Leakage in Collaborative Learning - Congzheng Song

IEEE
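The passive property inference attack the description mentions can be sketched in miniature: an attacker who observes a participant's model updates trains a classifier to tell whether a batch contained records with a sensitive property. The synthetic "updates" below, the 50-dimensional gradient stand-in, and the logistic-regression classifier are illustrative assumptions for the sketch, not the talk's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_update(has_property: bool, dim: int = 50) -> np.ndarray:
    """Toy stand-in for an observed model update (gradient snapshot).

    In the real attack these would be the updates a victim shares during
    collaborative training; here we simulate a property leaving a small
    signature in a few gradient coordinates.
    """
    g = rng.normal(0.0, 1.0, dim)
    if has_property:
        g[:5] += 1.5  # assumed signature of batches containing the property
    return g

# The attacker builds labeled auxiliary data by computing updates on
# batches it controls, then fits a property classifier on those updates.
X = np.array([fake_update(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print(f"property-inference accuracy on held-out updates: {acc:.2f}")
```

The point of the sketch is only that model updates can carry a learnable signal about properties of the training batch that are unrelated to the main task; the talk's experiments use real gradients from shared models rather than simulated ones.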