## Archive for **September 2009**

## Differential privacy and the secrecy of the sample

(This post was laid out lazily, using Luca's lovely latex2wp.)

**— 1. Differential Privacy —**

Differential privacy is a definition of “privacy” for statistical databases. Roughly, a statistical database is one which is used to provide aggregate, large-scale information about a population, without leaking information specific to individuals. Think, for example, of the data from government surveys (*e.g.* the decennial census or epidemiological studies), or data about a company’s customers that it would like a consultant to analyze.

The idea behind the definition is that *users* (that is, people getting access to aggregate information) should not be able to tell if a given individual's data has been changed.

More formally, a data set is just a subset of items in a domain $D$. For a given data set $x \subseteq D$, we think of the server holding the data as applying a randomized algorithm $A$, producing a random variable $A(x)$ (distributed over vectors, strings, charts, or whatever). We say two data sets $x, x'$ are *neighbors* if they differ in one element, that is, if the symmetric difference $x \triangle x'$ contains exactly one item.
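As a concrete illustration (a toy sketch of my own, not from the post; the set representation and function name are assumptions for the example), the neighbor relation is a one-line check on the symmetric difference:

```python
def are_neighbors(x, y):
    """Treat data sets as Python sets over some domain; x and y are
    neighbors when their symmetric difference has exactly one element,
    i.e., they differ in one individual's record."""
    return len(x.symmetric_difference(y)) == 1
```

For example, `{"alice", "bob", "carol"}` and `{"alice", "bob"}` are neighbors, while two sets differing in two records are not.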

**Definition 1** A randomized algorithm $A$ is $\epsilon$-differentially private if, for all pairs of neighbor data sets $x, x'$, and for all events $S$ in the output space of $A$:

$$\Pr[A(x) \in S] \;\leq\; e^{\epsilon} \cdot \Pr[A(x') \in S].$$

This definition has the flavor of *indistinguishability* in cryptography: it states that the random variables $A(x)$ and $A(x')$ must have similar distributions. The difference with the normal cryptographic setting is that the distance measure is multiplicative rather than additive. This is important for the semantics of differential privacy; see this paper for a discussion.
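To make the definition concrete, here is a minimal sketch (my own illustration, not from the post) of the standard Laplace mechanism for a counting query: the true count changes by at most 1 between neighboring data sets, so adding Laplace noise with scale $1/\epsilon$ gives an $\epsilon$-differentially private release.

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponentials with mean `scale`
    # is Laplace-distributed with that scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(data, predicate, epsilon):
    """Release the number of records satisfying `predicate`.

    The count has sensitivity 1 (neighbor data sets differ in one
    record), so Laplace(1/epsilon) noise suffices for
    epsilon-differential privacy.
    """
    true_count = sum(1 for row in data if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)
```

With $\epsilon = 1$ the released count is typically within a few units of the true count, yet any single individual's presence shifts the output distribution by at most a factor of $e$.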

I hope to write a sequence of posts on differential privacy, mostly discussing aspects that don’t appear in published papers or that I feel escaped attention.

**— 2. Sampling to Amplify Privacy —**

To kick it off, I'll prove here an "amplification" lemma for differential privacy. It was used implicitly in the design of an efficient, private PAC learner for the PARITY class in a FOCS 2008 paper by Shiva Kasiviswanathan, Homin Lee, Kobbi Nissim, Sofya Raskhodnikova and myself. But I think it is much more generally useful.

Roughly, it states that given a 1-differentially private algorithm, one can get a $2\beta$-differentially private algorithm at the cost of shrinking the size of the data set by a factor of $\beta$.

Suppose $A$ is a 1-differentially private algorithm that expects data sets from a domain $D$ as input. Consider a new algorithm $A^\beta$, which runs $A$ on a random subsample of the points from its input:

**Algorithm 2 (Algorithm $A^\beta$)** On input $\beta \in (0,1)$ and a multi-set $x \subseteq D$:

- Construct a set $T \subseteq x$ by selecting each element of $x$ independently with probability $\beta$.
- Return $A(T)$.
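In code, the wrapper is only a few lines (again a sketch of my own; `algorithm` stands for any differentially private subroutine playing the role of $A$):

```python
import random

def run_on_subsample(algorithm, data, beta):
    """Keep each element of `data` independently with probability beta,
    then run the given algorithm on the resulting subsample T."""
    subsample = [row for row in data if random.random() < beta]
    return algorithm(subsample)
```

Here `len` could be passed as a placeholder to check the plumbing (it keeps roughly a $\beta$ fraction of the input); a real use would pass a differentially private algorithm such as the noisy counter above.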

**Lemma 3 (Amplification via sampling)** If $A$ is 1-differentially private, then for any $\beta \in (0,1)$, $A^\beta$ is $2\beta$-differentially private.
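For intuition, here is a sketch of the standard calculation (not necessarily the exact proof the lemma's formal argument uses): write $x = x' \cup \{i\}$ for neighbor data sets and condition on whether the differing element $i$ lands in the subsample $T$. If $i \notin T$, the subsamples drawn from $x$ and $x'$ are identically distributed; if $i \in T$ (probability $\beta$), the 1-differential privacy of $A$ costs at most a factor of $e$. For any event $S$:

```latex
\Pr[A^{\beta}(x) \in S]
  = \beta \,\Pr[A(T) \in S \mid i \in T]
    + (1-\beta)\,\Pr[A(T) \in S \mid i \notin T] \\
  \le \beta\, e \,\Pr[A^{\beta}(x') \in S]
    + (1-\beta)\,\Pr[A^{\beta}(x') \in S]
  = \bigl(1 + \beta(e-1)\bigr)\Pr[A^{\beta}(x') \in S] \\
  \le e^{\beta(e-1)}\,\Pr[A^{\beta}(x') \in S]
  \;\le\; e^{2\beta}\,\Pr[A^{\beta}(x') \in S],
```

using $1 + t \le e^{t}$ and $e - 1 < 2$; the bound in the other direction is symmetric.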