Differential Privacy is now a gold standard for data privacy in many learning and statistical tasks. It has enjoyed over a decade of intense study, with focus on both upper and lower bounds in different settings for different problems. At the intersection of privacy and statistical estimation (henceforth called "private statistical estimation"), our understanding of fundamental problems has grown, but several open questions that had not received adequate attention emerged in the process. The goal of this dissertation has been to identify and address some of these challenges. We tackle these problems with a focus on reducing the cost of privacy while attaining near-optimal accuracy. Specifically, we make progress in answering the following questions.
--How to privately estimate the mean of distributions from various families?
--How to privately estimate the covariance of high-dimensional Gaussians with both sample and time efficiency?
--How to privately estimate the parameters of mixtures of high-dimensional Gaussians?
--When the data lies in some low-dimensional subspace, how do we privately learn that subspace with sample complexity that does not depend on the ambient dimension?
Future directions of private statistical estimation are also discussed.
--Author's abstract
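As an illustration of the first question, the sketch below shows a standard baseline for private mean estimation: the Gaussian mechanism applied to a clipped empirical mean. The function name, clipping radius, and privacy parameters are illustrative assumptions for this sketch, not the estimators developed in the dissertation.

```python
import numpy as np

def private_mean(samples, clip_radius, epsilon, delta, rng=None):
    """Estimate the mean of `samples` (n x d array) under (epsilon, delta)-DP.

    Each sample is clipped to an L2 ball of radius `clip_radius`, so one
    individual's data changes the empirical mean by at most 2*clip_radius/n
    in L2 norm; Gaussian noise calibrated to that sensitivity is then added.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_2d(np.asarray(samples, dtype=float))
    n, d = x.shape

    # Clip each row to the L2 ball of radius clip_radius to bound sensitivity.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x = x * np.minimum(1.0, clip_radius / np.maximum(norms, 1e-12))

    # L2 sensitivity of the clipped empirical mean (replace-one neighboring datasets).
    sensitivity = 2.0 * clip_radius / n

    # Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

    return x.mean(axis=0) + rng.normal(scale=sigma, size=d)


# Example: 10,000 samples from a 5-dimensional Gaussian, (1, 1e-6)-DP estimate.
data = np.random.default_rng(0).normal(loc=1.0, scale=1.0, size=(10_000, 5))
print(private_mean(data, clip_radius=10.0, epsilon=1.0, delta=1e-6))
```

The noise scale here grows with the clipping radius and shrinks with the sample size, which is one way to see the "cost of privacy" that the dissertation aims to reduce.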