Differentially Private Synthetic Data for Private LLM Training
Description:
Watch a 46-minute research talk from Google DeepMind's Andreas Terzis exploring the intersection of differential privacy and synthetic data generation for training large language models. Delve into critical aspects of LLM development, including alignment, trust mechanisms, watermarking techniques, and copyright considerations, while learning about approaches to maintaining privacy in AI training data. Gain insights into how synthetic data can be generated with differential privacy guarantees to protect sensitive information while still producing effective training datasets for language models.

Simons Institute