Differential Privacy Preservation in Deep Learning

Hai Phan
Ying Wu College of Computing


In recent years, advances in deep learning have enabled a dizzying array of applications in data analytics, signal and information processing, and autonomous systems. Because these systems are routinely trained on sensitive personal data, the deep learning models now being developed and deployed pose an obvious threat to privacy. In this talk, I will review the current picture of security and privacy in deep learning. I will then introduce our approaches to preserving differential privacy in deep learning and to uncovering vulnerabilities of differentially private deep neural networks. In the first approach, we use polynomial approximations, such as Chebyshev and Taylor expansions, to derive approximate polynomial representations of the objective functions used in deep learning; Laplace noise is then injected into the polynomial coefficients to preserve differential privacy. In the second approach, I will present our Adaptive Laplace Mechanism, which redistributes the privacy budget to maximize model utility under the same privacy loss. Future directions in privacy and security in deep learning will also be discussed.
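
To make the first approach concrete, below is a minimal Python sketch of the polynomial-approximation idea for a logistic loss: a second-order Taylor expansion around zero yields data-dependent coefficients, and Laplace noise calibrated to their sensitivity is added before any optimization runs. The normalization assumption (rows with ||x_i|| <= 1, labels in {-1, +1}) and the sensitivity bound are illustrative choices, not details taken from the abstract.

import numpy as np

def perturbed_taylor_coefficients(X, y, epsilon, rng):
    """Second-order Taylor approximation of the logistic loss,
    log(1 + exp(-y * w.x)) ~= log(2) - (y/2) * w.x + (1/8) * (w.x)^2,
    with Laplace noise added to the data-dependent coefficients.
    Assumes each row of X satisfies ||x_i|| <= 1 and y_i in {-1, +1}."""
    _, d = X.shape
    # Linear coefficients: sum over records of -(y_i / 2) * x_i
    c1 = -(y[:, None] * X).sum(axis=0) / 2.0
    # Quadratic coefficients: sum over records of (1/8) * x_i x_i^T
    c2 = np.einsum('ni,nj->ij', X, X) / 8.0
    # Illustrative global sensitivity bound under the normalization above
    # (d^2/4 + d is a commonly cited bound for this loss)
    sensitivity = d * d / 4.0 + d
    scale = sensitivity / epsilon
    # Perturb once, up front; training then touches only the noisy
    # coefficients and never the raw data again
    c1_noisy = c1 + rng.laplace(0.0, scale, size=c1.shape)
    c2_noisy = c2 + rng.laplace(0.0, scale, size=c2.shape)
    return c1_noisy, c2_noisy

# Example usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # enforce ||x_i|| <= 1
y = rng.choice([-1.0, 1.0], size=100)
c1, c2 = perturbed_taylor_coefficients(X, y, epsilon=1.0, rng=rng)

Because the training data is consulted only once, when the coefficients are computed, any model subsequently fit to the noisy coefficients inherits the epsilon-differential-privacy guarantee.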
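
The budget-redistribution idea behind the second approach can also be sketched in a few lines. This is a simplification of the Adaptive Laplace Mechanism: the relevance argument is a hypothetical stand-in for however per-feature importance is measured, and only the core allocation step is shown.

import numpy as np

def adaptive_laplace_perturbation(x, relevance, total_epsilon, sensitivity, rng):
    """Split a total privacy budget across features in proportion to a
    (hypothetical) relevance score, then add Laplace noise whose scale
    is inversely proportional to each feature's share: more relevant
    features receive a larger budget and therefore less noise. By
    sequential composition the per-feature budgets sum to total_epsilon."""
    shares = relevance / relevance.sum()
    eps_per_feature = total_epsilon * shares
    scales = sensitivity / eps_per_feature  # Laplace scale = sensitivity / eps_j
    return x + rng.laplace(0.0, scales)

# Example: three features, the first judged twice as relevant as the others
rng = np.random.default_rng(0)
x = np.array([0.8, 0.1, 0.4])
relevance = np.array([2.0, 1.0, 1.0])
x_noisy = adaptive_laplace_perturbation(x, relevance, total_epsilon=1.0,
                                        sensitivity=1.0, rng=rng)

The design point is that a uniform split wastes budget on features that barely influence the model's output; shifting budget toward influential features reduces noise where it matters most, improving utility at the same total privacy loss.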