Oddly Shaped Pegs

An inquiry into the Nature and Causes of Stuff

Posts Tagged ‘privacy’

Tutorial Videos on and around Differential Privacy


Aaron Roth and I organized a workshop on “Differential Privacy Across Computer Science” at DIMACS in the fall. Videos from the tutorials are now up (presumably they have been for a while, but I did not know it).
The tutorial speakers covered connections between DP and a range of areas:

  • Moritz Hardt: Differentially private algorithms via learning theory
  • Gerome Miklau: Query optimization techniques from the DB community
  • Benjamin Pierce: Using PL techniques to automate and verify proofs of privacy
  • Aaron Roth: Game-theoretic perspectives on privacy
All four talks were excellent, and they are a great resource for people interested in getting into the field.
Those talks all assume at least passing familiarity with differential privacy. For a gentler introduction, my tutorial from CRYPTO 2012 is online. The first third or so of the talk is not on differential privacy at all, but rather surveys the attacks and privacy breaches that motivated approaches such as differential privacy.
Watching the video, I realize that my talk was very slow-paced, so you may prefer to just read the slides (or maybe watch the video at 2x?).
Comments on any of the tutorials are welcome.

Written by adamdsmith

February 8, 2013 at 2:21 pm

DIMACS Workshop on Differential Privacy


Aaron Roth and I are running a 3-day interdisciplinary workshop on differential privacy at DIMACS (Rutgers), on October 24-26. This is immediately following FOCS, which is being held nearby, in downtown New Brunswick. The workshop will begin with a day of tutorials on differential privacy as understood in various communities (theory, databases, programming languages, and game theory), and will continue with two days of research talks and discussion.

Details of the workshop can be found here: http://dimacs.rutgers.edu/Workshops/DifferentialPrivacy/
(N.B.: some additional speakers have confirmed but are not yet listed on the web page.)

As part of the program, we will also have a session of short (5-10 minute) talks from students, postdocs, and other interested parties. We encourage submission of abstracts for short talks. The solicitation is below.

See you all in October!
Aaron and Adam

DIMACS Workshop on Differential Privacy across Computer Science
October 24-26, 2012
(immediately after FOCS 2012)

Call for Abstracts — Short Presentations

The upcoming DIMACS workshop on differential privacy will feature invited talks by experts from a range of areas in computer science as well as short talks (5 to 10 minutes) by participants.

Participants interested in giving a short presentation should send an email to asmith+dimacs@psu.edu containing a proposed talk title, abstract, and the speaker’s name and affiliation. We will try to accommodate as many speakers as possible, but

a) requests received before October 1 will get full consideration
b) priority will be given to junior researchers, so students and postdocs should indicate their status in the email.

More information about the workshop:

The last few years have seen an explosion of results concerning differential privacy across many distinct but overlapping communities in computer science: Theoretical Computer Science, Databases, Programming Languages, Machine Learning, Data Mining, Security, and Cryptography. Each of these different areas has different priorities and techniques, and despite very similar interests, motivations, and choice of problems, it has become difficult to keep track of this large literature across so many different venues. The purpose of this workshop is to bring researchers in differential privacy across all of these communities together under one roof to discuss recent results and synchronize our understanding of the field. The first day of the workshop will include tutorials, representing a broad cross-section of research across fields. The remaining days will be devoted to talks on the exciting recent results in differential privacy across communities, discussion and formation of interesting open problems, and directions for potential inter-community collaborations.

A tentative program and registration information can be found at http://dimacs.rutgers.edu/Workshops/DifferentialPrivacy/

Written by adamdsmith

September 17, 2012 at 9:43 am

Postdocs in data privacy at Penn State


I have been excessively delinquent in posting to this blog for the last little while (ok, two years). But a postdoc announcement is a terrible thing to hide from public view in the current economy.

Postdoctoral positions in statistical, computational and learning-theoretic aspects of data privacy

As part of a joint project between Penn State, CMU and Cornell, we are inviting applications for several postdoctoral positions at Penn State University.

The principal investigators at Penn State are:
Sofya Raskhodnikova,
Aleksandra Slavkovic  and
Adam Smith.

The other principal investigators on this project are:
Stephen Fienberg (CMU) and
John Abowd (Cornell).

We are looking for strong candidates interested in algorithmic, cryptographic, statistical and learning-theoretic aspects of data privacy. Candidates should have a Ph.D. in statistics, computer science or a related field and a strong record of original research. The positions are for one year, extendable to up to three years. The starting date is negotiable.

The project spans a broad range of activities from the exploration of foundational theory to the development of concrete methodology for the social and economic sciences. Postdoctoral fellows may be involved in any of these aspects, depending on their interests and expertise. Extended research visits at CMU and Cornell are possible, though not necessary.

Interested candidates should send a CV and brief research statement, along with the names of three references, to one of the three Penn State investigators (sofya@cse.psu.edu, sesa@stat.psu.edu, asmith@psu.edu). Applications received before February 25, 2012 will receive full consideration. Applications will continue to be considered after that date until the positions are filled.

Looking for a different postdoc?

In case the opportunity above isn’t your cup of tea, here are some public service tips on where to look for postdoc announcements.

… and that’s pretty much it. The postdoc market, especially in CS, is ridiculously inefficient. That’s partly because many postdocs (like mine) are project specific, and partly because there’s just no good central repository of relevant jobs.

With that in mind, I will mention the postdoc position in the theory of privacy and economics at the University of Pennsylvania. If you really want to do a postdoc on data privacy, and the Penn State/CMU/Cornell position won’t work for you, then talk to Aaron Roth (or Mike Kearns, Sham Kakade or Mallesh Pai).

Written by adamdsmith

February 4, 2012 at 10:43 pm

Posted in Getting Science Done


IPAM Workshop Wrap-Up


Last week was the workshop on Statistical and Learning-Theoretic Challenges in Data Privacy, which I co-organized with Cynthia Dwork, Steve Fienberg and Sesa Slavkovic. As I explained in my initial post on the workshop, the goal was to tie together work on privacy in statistical databases with the theoretical foundations of learning and statistics.

The workshop was a success. For one thing, I got a new result out of it and lots of ideas for problems to work on. I even had fun most of the time [1].

— A shift in tone —

More importantly, I felt a different tone in the conversations and talks at this workshop than at previous ones involving a similar crowd. For the first time, most participants seemed to agree on what the important issues are. I’ve spent lots of time hanging out with statisticians recently, so this feeling may not have been shared by everyone. But one change was objectively clear: the statisticians in the crowd have become much better at describing their problems in computational terms. I distinctly remember encountering fierce resistance, at the original 2005 CS-Stats privacy workshop in Bertinoro, when we reductionist CS types tried to get statisticians to spell out the procedures they use to analyze social science data.

“Analysis requires judgement. It is as much art as science,” they said (which we translated as, “Recursion, shmecursion. We do not know our own programs!”).

“But can’t you try to pin down some common objectives?”, we answered.

This week, there were algorithms and well-defined objectives galore. It helped that we had some polyglots, like Martin Wainwright and Larry Wasserman, around to translate.

— The “computational lens” at work —

An interesting feature of several talks was the explicit role of the “computational” perspective. Both Frank McSherry and Yuval Nardi used techniques from numerical analysis, namely gradient ascent and the Newton-Raphson method, to design protocols that were both more efficient and easier to analyze than previous attempts based on a more global, structural perspective. Frank described a differentially private algorithm for logistic regression, joint with Ollie Williams; Yuval described an efficient SFE protocol for linear regression, joint with Steve Fienberg, Rob Hall, and others.
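For readers unfamiliar with the pattern, here is a hedged sketch of one noisy gradient-ascent step on the logistic log-likelihood. This illustrates the general "perturb the optimizer" idea only; it is not the McSherry-Williams algorithm, and calibrating `noise_scale` to an actual privacy guarantee is omitted. All names are my own.

```python
import math
import random

def noisy_gradient_step(w, data, eta, noise_scale, rng=random):
    """One gradient-ascent step on the logistic log-likelihood,
    with noise added to the gradient before the update.

    Illustrative sketch only: a real private algorithm must calibrate
    noise_scale to the gradient's sensitivity and the privacy budget.
    """
    grad = [0.0] * len(w)
    for features, label in data:  # label is 0 or 1
        z = sum(wi * xi for wi, xi in zip(w, features))
        p = 1.0 / (1.0 + math.exp(-z))       # predicted probability
        for j, xj in enumerate(features):
            grad[j] += (label - p) * xj      # log-likelihood gradient
    # Perturb each coordinate of the gradient, then take a step.
    return [wi + eta * (g + rng.gauss(0.0, noise_scale))
            for wi, g in zip(w, grad)]
```

Iterating such steps yields an approximate maximizer whose privacy cost accumulates across iterations, which is one reason the numerical-analysis view (few, well-understood steps) pays off.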

— Two under-investigated ideas —

At the wrap-up session (see the notes), I pointed out two directions that I think have been investigated with much less rigor than they deserve:

“Cryptanalysis” for database privacy

It would be nice to have a systematic study of, and standard nomenclature for, attacks on privacy/anonymity in statistical databases. Right now it seems every paper ends up defining (or not defining) a model from scratch, yet many papers are doing essentially the same thing in different domains. Even an incomplete taxonomy would be helpful. Here are a few terms I’d like to see becoming standard:

  • linkage attack
  • reconstruction attack
  • composition attack (my personal favorite)

On a related point, it would be nice to see a good categorization of the kinds of side information that get used. For example, Johannes Gehrke at Cornell and his students have a few papers laying out categories of side information (I have issues with some of the positive results in those papers, but I think the quantification of side information is interesting).

Relaxed definitions of privacy with meaningful semantics

This is probably a topic for a much longer post, but briefly: it would be nice to see meaningful definitions of privacy in statistical databases that exploit the adversary’s uncertainty about the data. The usual approach is to specify a set of allowable prior distributions on the data (from the adversary’s point of view). However, one has to be careful: the versions I have seen are quite brittle. Some properties to keep in mind when considering new definitions:

  • Composition
  • Side information: is the class of priors rich enough to incorporate complex side information, such as an anonymization of a related database? [see composition above]
  • Convexity and post-processing, as in Dan Kifer’s talk
  • Equivalent, “semantic” characterizations [e.g. here, here]

— Other notes —

  • The majority of the talks were completely or partly on differential privacy. Notable exceptions: Brad Malin, Xiaofeng Wang, Ravi Kumar, Jiashun Jin, Yuval Nardi. Our goal was not to have such a preponderance of differential privacy talks, but some of the people we expected to talk about other things (like Jerry Reiter) decided to focus on differential privacy. Tailoring the talk to the crowd?
  • The nonspeaker participants were heavily skewed towards CS. In particular, at least [see comments!] four professors (Gerome Miklau, Anupam Gupta, Jonathan Katz, Yevgeniy Dodis) and three postdocs (Katrina Ligett, Anand Sarwate, Arvind Narayanan) from CS departments attended just to listen to the talks; I recognized only one stats postdoc (Saki Kinney). I also recognized lots of UCLA locals from CS (Yuval Ishai, Rafi Ostrovsky, Amit Sahai), but none from statistics.
  • The rump session + posters combination worked very well (despite my earlier doubts). Rump session slides are online.

[1] Serious sleep deprivation due to jet-lagged kids and talk prep made the “fun” part occasionally difficult.

Written by adamdsmith

March 4, 2010 at 8:55 pm

A private event


It wasn’t exactly in stealth mode, but I heard about Data Privacy Day 2010 only after it happened.

Born of an effort to promote awareness of data privacy issues by the non-profit The Privacy Projects, this year’s celebration (?) included events at several universities. Most interesting to me was a roundtable discussion at UC Berkeley sponsored by the Federal Trade Commission. I’m skeptical about how much the federal government will do about protecting privacy, but it is good to see serious interest.

This year’s events concentrated on consumer privacy and its apparent conflict with emerging business models. My recent research has been on handling privacy concerns in “statistical databases” — large collections of sensitive information that we would like to open up to wider scrutiny and analysis. Unsurprisingly, I would like to see “Data Privacy Day” also cover this aspect of data privacy. There is a danger, though, that the topic becomes too diffuse. What are really the most pressing privacy issues, and what should a broad “data privacy” awareness event cover?

UPDATE (2/3/10): Arvind was part of the roundtable and has some notes on it at 33 bits. He includes there some interesting comments on academics’ participation in policy discussions. I’ll add only that at Penn State, quite a few faculty members are involved in policy, but mostly away from the public eye. For example, two weeks ago I met with a White House official about privacy issues in the release of government data sets; I’ve also been involved in (executive branch) panels on government handling of biometric data. However, it is true that public participation in policy discussions by academics is limited. That may be because many academics realize they would make bad politicians; as Arvind notes, misaligned incentives also play a role.

Written by adamdsmith

February 2, 2010 at 2:02 pm

Posted in Data privacy


Differential privacy and the secrecy of the sample


(This post was laid out lazily, using Luca‘s lovely latex2wp.)

— 1. Differential Privacy —

Differential privacy is a definition of “privacy” for statistical databases. Roughly, a statistical database is one which is used to provide aggregate, large-scale information about a population, without leaking information specific to individuals. Think, for example, of the data from government surveys (e.g. the decennial census or epidemiological studies), or data about a company’s customers that it would like a consultant to analyze.

The idea behind the definition is that users–that is, people getting access to aggregate information–should not be able to tell if a given individual’s data has been changed.

More formally, a data set is just a subset of items in a domain {D}. For a given data set {x\subset D}, we think of the server holding the data as applying a randomized algorithm {A}, producing a random variable {A(x)} (distributed over vectors, strings, charts, or whatever). We say two data sets {x,x'} are neighbors if they differ in one element, that is, {|x\ \triangle\ x'| =1}.

Definition 1 A randomized algorithm {A} is {\epsilon}-differentially private if, for all pairs of neighbor data sets {x,x'}, and for all events {S} in the output space of {A}:

\displaystyle \Pr(A(x)\in S) \leq e^\epsilon \Pr(A(x')\in S)\,.

This definition has the flavor of indistinguishability in cryptography: it states that the random variables {A(x)} and {A(x')} must have similar distributions. The difference with the normal cryptographic setting is that the distance measure is multiplicative rather than additive. This is important for the semantics of differential privacy—see this paper for a discussion.
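To make Definition 1 concrete, here is a minimal sketch of the standard Laplace mechanism for a counting query (the mechanism is well known but not described in this post; the function names are my own). A count changes by at most 1 between neighboring data sets, so Laplace noise with scale {1/\epsilon} satisfies the definition:

```python
import math
import random

def laplace_noise(scale, rng=random):
    # Sample from the Laplace(0, scale) distribution via inverse CDF.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(dataset, predicate, epsilon, rng=random):
    """Release a count with epsilon-differential privacy.

    Adding or removing one element changes the true count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for item in dataset if predicate(item))
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

For example, `private_count(ages, lambda a: a > 40, epsilon=0.5)` releases a noisy count of people over 40; smaller {\epsilon} means more noise and a stronger guarantee.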

I hope to write a sequence of posts on differential privacy, mostly discussing aspects that don’t appear in published papers or that I feel escaped attention.

— 2. Sampling to Amplify Privacy —

To kick it off, I’ll prove here an “amplification” lemma for differential privacy. It was used implicitly in the design of an efficient, private PAC learner for the PARITY class in a FOCS 2008 paper by Shiva Kasiviswanathan, Homin Lee, Kobbi Nissim, Sofya Raskhodnikova and myself. But I think it is of much more general usefulness.

Roughly it states that given a {O(1)}-differentially private algorithm, one can get an {\epsilon}-differentially private algorithm at the cost of shrinking the size of the data set by a factor of {\epsilon}.

Suppose {A} is a {1}-differentially private algorithm that expects data sets from a domain {D} as input. Consider a new algorithm {A'}, which runs {A} on a random subsample of { \approx\epsilon n} points from its input:

Algorithm 2 (Algorithm {A'}) On input {\epsilon \in (0,1 )} and a multi-set {x\subseteq D}

  1. Construct a set {T\subseteq x} by selecting each element of {x} independently with probability {\epsilon}.
  2. Return {A(T)}.

Lemma 3 (Amplification via sampling) If {A} is {1}-differentially private, then for any {\epsilon\in(0,1)}, {A'(\epsilon,\cdot)} is {2\epsilon}-differentially private.
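Algorithm 2 is short enough to state in code. A minimal sketch, assuming {A} is given as a Python callable; the trivial {A} used below for illustration is of course not itself differentially private, and as the post's title suggests, the guarantee requires that the subsample {T} itself stay hidden:

```python
import random

def subsample_then_run(A, x, epsilon, rng=random):
    """Amplification via sampling (sketch of Algorithm 2).

    Keep each element of x independently with probability epsilon,
    then run A on the subsample. If A is 1-differentially private,
    Lemma 3 says the composed algorithm is 2*epsilon-DP.
    """
    T = [item for item in x if rng.random() < epsilon]
    return A(T)
```

The interesting case is {\epsilon} strictly between 0 and 1: the expected subsample size is {\epsilon n}, which is the "shrinking by a factor of {\epsilon}" cost mentioned above.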


Written by adamdsmith

September 2, 2009 at 12:19 pm

Insensitive attributes


And the award for best blog post title of the day goes to…

There is an Elephant in the Room; & Everyone’s Social Security Numbers are Written on Its Hide.

The post, by Richard Power, reports on an article by Alessandro Acquisti and Ralph Gross, “Predicting Social Security numbers from public data” (faq, PNAS paper), which highlights how one can narrow down a US citizen’s social security number to a relatively small range based only on his or her state and date of birth.

As the Social Security Administration explained (see the elephantine blog post linked above), this was not really a secret; the SSA’s algorithm for generating SSNs is public. The virtue of the Acquisti-Gross article is in pointing out the security implications of this clearly.

One of the interesting notions the study puts to rest is the distinction between “insensitive” and “sensitive” attributes. Almost anything can be used to identify a person, and once someone has a handle on you it is remarkably easy to predict, or find out, even more.

Written by adamdsmith

July 7, 2009 at 10:23 pm

Posted in Data privacy
