Privacy Amplification from Structured Algorithmic Randomness
Author: Simons Institute for the Theory of Computing
Uploaded: 2026-02-24
Views: 59
Description:
Ayfer Ozgur (Stanford University)
https://simons.berkeley.edu/talks/ayf...
Learning from Heterogeneous Sources
Differentially private training methods typically rely on injecting external noise at each iteration, as in DP-SGD, to limit the influence of individual data points. In this talk, we will explore how inherent algorithmic randomness already embedded in modern AI training pipelines for non-privacy reasons can be harnessed for privacy amplification, thereby reducing reliance on externally injected noise.
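To make the baseline concrete, here is a minimal sketch of one DP-SGD-style update: each per-example gradient is clipped to bound any single point's influence, then Gaussian noise calibrated to the clipping norm is added. The function name, parameter values, and use of NumPy are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_mult=1.0, lr=0.1, rng=None):
    """One illustrative DP-SGD update (sketch, not a vetted implementation)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Clip each per-example gradient so no single data point can
    # shift the update by more than clip_norm.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    # Add Gaussian noise scaled to the clipping norm (the externally
    # injected randomness the talk proposes to reduce), then average.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=np.asarray(params).shape)
    return params - lr * noisy_sum / len(per_example_grads)
```

With `noise_mult=0` this reduces to plain clipped SGD, which makes the role of the injected noise easy to isolate when experimenting.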
Prior work has studied privacy amplification through user or data subsampling, but largely under idealized assumptions such as independent Poisson subsampling. In practice, training pipelines exhibit more structured, system-driven forms of randomness. The goal of this talk is twofold: first, to move beyond idealized subsampling models toward structured sampling mechanisms that better reflect real-world constraints; and second, to investigate additional sources of algorithmic randomness, including model partitioning, dropout, and compression, that naturally limit how much information any single sample or user contributes to the final model. We will discuss how these mechanisms can be rigorously quantified to strengthen privacy guarantees at scale.
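The idealized baseline the talk moves beyond can be stated in one formula: for a mechanism satisfying pure ε-DP, Poisson subsampling each record independently with rate q amplifies the guarantee to ε' = log(1 + q(e^ε − 1)). A minimal sketch (the function name is illustrative):

```python
import math

def amplified_eps(eps: float, q: float) -> float:
    """Privacy amplification by Poisson subsampling (pure eps-DP case).

    Each record is included independently with probability q; the
    subsampled mechanism satisfies the smaller eps' returned here.
    """
    # log1p/expm1 keep the computation stable for small eps and q.
    return math.log1p(q * math.expm1(eps))
```

For example, with ε = 1.0 and a 1% sampling rate q = 0.01, the amplified ε' is roughly 0.017, i.e. far below the original budget. Structured, system-driven sampling breaks the independence assumption behind this formula, which is what motivates the analyses in the talk.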