Oddly Shaped Pegs

An inquiry into the Nature and Causes of Stuff

Posts Tagged ‘interdisciplinary work’

DIMACS Workshop on Differential Privacy

with one comment

Aaron Roth and I are running a 3 day interdisciplinary workshop on differential privacy at DIMACS (Rutgers), on October 24-26. This is immediately following FOCS, which is being held nearby, in downtown New Brunswick. The workshop will begin with a day of tutorials on differential privacy as understood in various communities (theory, databases, programming languages, and game theory), and will continue with two days of research talks and discussion.

Details of the workshop can be found here: http://dimacs.rutgers.edu/Workshops/DifferentialPrivacy/ (n.b.: some additional speakers have confirmed but are not yet listed on the web page).

As part of the program, we will also have a session of short (5-10 minute) talks from students, postdocs, and other interested parties. We encourage submission of abstracts for short talks. The solicitation is below.

See you all in October!
Aaron and Adam

DIMACS Workshop on Differential Privacy across Computer Science
October 24-26, 2012
(immediately after FOCS 2012)

Call for Abstracts — Short Presentations

The upcoming DIMACS workshop on differential privacy will feature invited talks by experts from a range of areas in computer science as well as short talks (5 to 10 minutes) by participants.

Participants interested in giving a short presentation should send an email to asmith+dimacs@psu.edu containing a proposed talk title, abstract, and the speaker’s name and affiliation. We will try to accommodate as many speakers as possible, but

a) requests received before October 1 will get full consideration
b) priority will be given to junior researchers, so students and postdocs should indicate their status in the email.

More information about the workshop:

The last few years have seen an explosion of results concerning differential privacy across many distinct but overlapping communities in computer science: Theoretical Computer Science, Databases, Programming Languages, Machine Learning, Data Mining, Security, and Cryptography. Each of these different areas has different priorities and techniques, and despite very similar interests, motivations, and choice of problems, it has become difficult to keep track of this large literature across so many different venues. The purpose of this workshop is to bring researchers in differential privacy across all of these communities together under one roof to discuss recent results and synchronize our understanding of the field. The first day of the workshop will include tutorials, representing a broad cross-section of research across fields. The remaining days will be devoted to talks on the exciting recent results in differential privacy across communities, discussion and formation of interesting open problems, and directions for potential inter-community collaborations.

A tentative program and registration information can be found at
http://dimacs.rutgers.edu/Workshops/DifferentialPrivacy/

Written by adamdsmith

September 17, 2012 at 9:43 am

ICITS 2012 — playing with the format

with 2 comments

I am the program chair for this year’s ICITS, the International Conference on Information-Theoretic Security. (The acronym is admittedly a bit of a mouthful. I like “ickets” as the pronunciation. That way, papers at ICITS are “pickets”, talks there are “tickets”, you get the idea.) ICITS will be held in Montreal right before CRYPTO, August 15-17, 2012.

ICITS occupies an interesting spot at the intersection of a few different fields: crypto, information theory, quantum computing and combinatorics. In the past, ICITS has worked like a normal computer science conference: papers are reviewed carefully, papers cannot have appeared at other conferences or journals, etc. However, because ICITS serves several different communities, the format has sometimes cost it good papers: some are lost to more specific or better-known venues in computer science, others are lost because conference “publication” doesn’t fit well with the culture in other fields, etc.

So to try to broaden participation and make the conference more scientifically useful, we’re shaking up the format this year with a two-track submission process. The “conference” track will operate like a traditional conference with the usual review process and published proceedings. The “workshop” track will operate more like an informal workshop, without published proceedings. Submissions to the former track will follow a traditional page-limited format. Submissions to the latter are much more flexible in format (they can range from full papers to extended abstracts), and may consist of previously published papers or works in progress. For example, the workshop track would be a great place to come present your Crypto/Eurocrypt, QIP or ISIT paper to the other communities that work on info-theoretic security.

You can see the call for papers if you’re curious about the process. But most importantly, get your papers ready for submission! The deadlines are

  • March 12 for the regular track and
  • April 9 for workshop papers.

In addition to the contributed papers we will have a great slate of invited speakers from a broad range of disciplines. And did I mention that the program committee rocks?

Of course, the best part of this is that ICITS will be in Montreal in the summer time. Despite its French character, not all of Montreal goes on vacation in August (in fact, the city does shut down for two weeks, the “construction holidays”, but those will be over by the time ICITS hits town). There are festivals, tasty food, nice weather and, for me, lots of friends and family to see.

So submit your papers! And attend!

Written by adamdsmith

February 9, 2012 at 12:05 am

IPAM Workshop Wrap-Up

with 2 comments

Last week was the workshop on Statistical and Learning-Theoretic Challenges in Data Privacy, which I co-organized with Cynthia Dwork, Steve Fienberg and Sesa Slavkovic. As I explained in my initial post on the workshop, the goal was to tie together work on privacy in statistical databases with the theoretical foundations of learning and statistics.

The workshop was a success. For one thing, I got a new result out of it and lots of ideas for problems to work on. I even had fun most of the time [1].

— A shift in tone —

More importantly, I felt a different tone in the conversations and talks at this workshop than at previous ones involving a similar crowd. For the first time, most participants seemed to agree on what the important issues are. I’ve spent lots of time hanging out with statisticians recently, so this feeling may not have been shared by everyone. But one change was objectively clear: the statisticians in the crowd have become much better at describing their problems in computational terms. I distinctly remember encountering fierce resistance, at the original 2005 CS-Stats privacy workshop in Bertinoro, when we reductionist CS types tried to get statisticians to spell out the procedures they use to analyze social science data.

“Analysis requires judgement. It is as much art as science,” they said (which we translated as, “Recursion, shmecursion. We do not know our own programs!”).

“But can’t you try to pin down some common objectives?”, we answered.

This week, there were algorithms and well-defined objectives galore. It helped that we had some polyglots, like Martin Wainwright and Larry Wasserman, around to translate.

— The “computational lens” at work —

An interesting feature of several talks was the explicit role of the “computational” perspective. Both Frank McSherry and Yuval Nardi used techniques from numerical analysis, namely gradient ascent and the Newton-Raphson method, to design protocols that were both more efficient and easier to analyze than previous attempts based on a more global, structural perspective. Frank described a differentially private algorithm for logistic regression, joint with Ollie Williams; Yuval described an efficient SFE protocol for linear regression, joint with Steve Fienberg, Rob Hall, and others.
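
To give a flavor of the gradient-based approach, here is a minimal sketch of logistic regression trained by noisy gradient ascent. It is an illustration only, not the McSherry-Williams algorithm: the clipping bound, noise scale, step size, and function name are assumptions made for the example, and a real implementation would need careful privacy accounting across iterations.

    import numpy as np

    def dp_logistic_regression(X, y, iters=100, lr=0.1, clip=1.0, sigma=1.0):
        """Noisy gradient ascent for logistic regression (illustrative sketch).

        X: (n, d) feature matrix; y: length-n array of 0/1 labels.
        clip bounds each example's gradient norm (its influence on one step);
        sigma scales the Gaussian noise added to each summed gradient.
        """
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            preds = 1.0 / (1.0 + np.exp(-(X @ w)))         # predicted probabilities
            grads = (y - preds)[:, None] * X                # per-example gradients, (n, d)
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip)   # clip each example's contribution
            noisy = grads.sum(axis=0) + np.random.normal(0.0, sigma * clip, size=d)
            w += lr * noisy / n                             # ascend the noisy gradient
        return w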

— Two under-investigated ideas —

At the wrap-up session (see the notes), I pointed out two directions that I think have been investigated with much less rigor than they deserve:

“Cryptanalysis” for database privacy

It would be nice to have a systematic study of, and standard nomenclature for, attacks on privacy/anonymity in statistical databases. Right now it seems every paper ends up defining (or not defining) a model from scratch, yet many papers are doing essentially the same thing in different domains. Even an incomplete taxonomy would be helpful. Here are a few terms I’d like to see becoming standard:

  • linkage attack (a toy sketch appears after this list)
  • reconstruction attack
  • composition attack (my personal favorite)
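
To make the first of these concrete, here is a toy sketch of a linkage attack in the spirit of the classic voter-roll demonstrations. The table contents and column names are invented for the example.

    import pandas as pd

    # A "de-identified" medical table: names removed, quasi-identifiers kept.
    deidentified = pd.DataFrame({
        "zip":        ["16801", "16801", "90024"],
        "birth_date": ["1975-03-02", "1980-07-15", "1975-03-02"],
        "sex":        ["F", "M", "F"],
        "diagnosis":  ["asthma", "diabetes", "hypertension"],
    })

    # Public auxiliary data (e.g., a voter roll) with names attached.
    voter_roll = pd.DataFrame({
        "name":       ["Alice Adams", "Carol Clark"],
        "zip":        ["16801", "90024"],
        "birth_date": ["1975-03-02", "1975-03-02"],
        "sex":        ["F", "F"],
    })

    # Joining on the quasi-identifiers re-attaches names to diagnoses
    # whenever the combination (zip, birth_date, sex) is distinctive enough.
    linked = deidentified.merge(voter_roll, on=["zip", "birth_date", "sex"])
    print(linked[["name", "diagnosis"]])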

On a related point, it would be nice to see a good categorization of the kinds of side information that get used. For example, Johannes Gehrke at Cornell and his students have a few papers laying out categories of side information (I have issues with some of the positive results in those papers, but I think the quantification of side information is interesting).

Relaxed definitions of privacy with meaningful semantics

This is probably a topic for a much longer post, but briefly: it would be nice to see meaningful definitions of privacy in statistical databases that exploit the adversary’s uncertainty about the data. The normal approach to this is to specify a set of allowable prior distributions on the data (from the adversary’s point of view). However, one has to be careful. The versions I have seen are quite brittle. Some properties to keep in mind when considering new definitions (for reference, the standard definition being relaxed, and its composition property, are recalled after the list):

  • Composition
  • Side information: is the class of priors rich enough to incorporate complex side information, such as an anonymization of a related database? [see composition above]
  • Convexity and post-processing, as in Dan Kifer’s talk
  • Equivalent, “semantic” characterizations [e.g. here, here]
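
For reference, the baseline that such relaxed definitions weaken is differential privacy; a standard statement of the definition and of basic sequential composition (the first item above), not tied to any particular talk, is:

    A randomized algorithm $M$ is $\varepsilon$-differentially private if, for every pair of
    databases $D, D'$ differing in a single record and every set $S$ of possible outputs,
    \[
      \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
    \]
    Basic (sequential) composition: if $M_1$ and $M_2$ are $\varepsilon_1$- and
    $\varepsilon_2$-differentially private respectively, then the combined mechanism
    $D \mapsto (M_1(D), M_2(D))$ is $(\varepsilon_1 + \varepsilon_2)$-differentially private.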

— Other notes —

  • The majority of the talks were completely or partly on differential privacy. Notable exceptions: Brad Malin, Xiaofeng Wang, Ravi Kumar, Jiashun Jin, Yuval Nardi. Our goal was not to have such a preponderance of differential privacy talks, but some of the people we expected to talk about other things (like Jerry Reiter) decided to focus on differential privacy. Tailoring the talk to the crowd?
  • The nonspeaker participants were heavily skewed towards CS. In particular, at least [see comments!] four professors (Gerome Miklau, Anupam Gupta, Jonathan Katz, Yevgeniy Dodis) and three postdocs (Katrina Ligett, Anand Sarwate, Arvind Narayanan) from CS departments attended just to listen to the talks; I recognized only one stats postdoc (Saki Kinney). I also recognized lots of UCLA locals from CS (Yuval Ishai, Rafi Ostrovsky, Amit Sahai) but none from statistics.
  • The rump session + posters combination worked very well (despite my earlier doubts). Rump session slides are online.

[1] Serious sleep deprivation due to jet-lagged kids and talk prep made the “fun” part occasionally difficult.

Written by adamdsmith

March 4, 2010 at 8:55 pm

IPAM Workshop on Privacy and Statistical (Learning) Theory

with 2 comments

I am on the organizing committee (with Cynthia Dwork, Steve Fienberg, and Sesa Slavkovic) for an upcoming workshop at UCLA’s Institute for Pure and Applied Mathematics (IPAM). The workshop is on the relationship between database privacy and the theoretical foundations of statistics and machine learning. It is imaginatively titled:

Statistical and Learning-Theoretic Challenges in Data Privacy
(February 22-26, 2010)

(because the catchier “What Can We Learn Privately?” was already taken).

The workshop web page describes the basic thrust pretty concisely:

The goal of the workshop is to establish a coherent theoretical foundation for research on data privacy. This implies work on (1) how the conflicting goals of privacy and utility can or should be formulated mathematically; and (2) how the constraints of privacy—in their various incarnations—affect the accuracy of statistical inference and machine learning. In particular, the goal is to shed light on the interplay between privacy and concepts such as consistency and efficiency of estimators, generalization error of learning, robustness and stability of estimation algorithms, and the generation of synthetic data.

The workshop is born of (what I consider) an exciting research program with potential payoffs both for how sensitive data is managed (see, e.g., Abe Flaxman’s post on a recommendation for HIPAA’s overhaul) and for statistics and statistical learning theory. For more detailed discussion, see:

Participation is open to essentially anyone; to make it easier, IPAM has funding to help some attendees with their travel costs, especially students and other junior researchers. You can apply through the IPAM web page.

Several excellent researchers have already confirmed that they will speak (see the web page for the current list). I am especially happy about the breadth of the areas they hail from: crypto, algorithms, social science statistics, nonparametric statistics, theoretical and applied machine learning, and health data privacy, among others.  Of special note, there will be four tutorials aimed at helping the diverse audience actually communicate:

  • Larry Wasserman and Martin Wainwright will speak about the basic foundations of statistics and statistical learning theory;
  • Two other people (possibly Cynthia Dwork and myself) will discuss the definitional approaches to privacy that have come out of the CS literature, especially differential privacy, and also the worst-case analysis perspective that is common to TCS papers.

The exact format and content of the tutorials are still t.b.d., so suggestions (either directly to me or via comments on this post) would be welcome.

Why the workshop?

Good interdisciplinary work is notoriously hard. The first barrier is linguistic: new terminology, definitions, measures of accuracy/loss, etc (“like a U.N. meeting without the benefit of translators”, as Dick Lipton recently put it, describing some of Blum and Furst’s initial interactions with AI folks). Nevertheless, the terminology barrier can be overcome relatively easily (i.e., on a scale of months or years) in theoretical fields with clean definitions and theorems, such as theoretical statistics and learning.

The more subtle barrier, and one that usually takes much more work to overcome, is one of perspective. Merely using the right language will get you partway, but “the wider point of view of [people in other fields] can be harder to grok” (Noam Nisan). What problem are they really trying to solve? What is their criterion for evaluating the quality or interest of a new idea? To add to the confusion, a field that looks monolithic from the outside may in fact be a highly heterogeneous mix of subdisciplines. For example, the question “what do statisticians actually (need to) do?”, which many of us working on data privacy have wondered aloud, has approximately as many answers as there are statisticians doing things…

As far as I can tell, these perspective barriers are best overcome by osmosis: spend as much time as possible interacting with a wide variety of people from the other field. I think theoretical work provides an especially fruitful venue for interaction because its products (namely definitions and theorems) are more universally interpretable. Of course, this opinion may simply be a result of my own preference for theoretical work…

So how does one get these interactions going? External stimuli, like deadlines for collaborative grant proposals, can help, but grant proposals require people who are already committed to working together. Workshops and conferences are also an important venue. Regarding data privacy, there were several successful workshops encouraging CS-statistics interaction: Bertinoro in 2005, NSF in 2007, CMU in 2007, DIMACS in 2008, NCHS in 2008 (no more web pages for two of those, unfortunately). The upcoming IPAM workshop is the first with an explicitly theoretical bent; I am hoping it will be an even greater success.

Written by adamdsmith

June 19, 2009 at 11:15 am