Oddly Shaped Pegs

An inquiry into the Nature and Causes of Stuff

Je m’en (af)fiche

Tucked neatly between (US) Independence Day and (French) Bastille Day this year was YESS, a joint French-US workshop for “young scientists and engineers”, held at the French embassy. This year’s theme was Identity Management, which largely meant biometrics, database privacy and anonymous communication. I wasn’t sure from the initial invitation how serious the whole thing was (note to organizers: explicit mention of freely flowing champagne in a workshop invitation is appealing, but a little suspicious), but it turned out to be a great workshop.

Ok, almost great. There were a number of program/department overviews by French and US officials, some of which made me feel like Bill Gasarch did in this picture. (To the speakers’ credit, no one made jokes about freedom fries even though fries were served in the Embassy cafeteria.) But the scientific talks were interesting, and I got to meet a cross-section of French researchers I don’t normally interact with. I also learned a lot about Tor and its role in Iran from Roger Dingledine.

I walked away from “YESS” with a few distinct impressions.

• It is extremely difficult to give a program or departmental overview that is fun to listen to.
• The French are a few years into the process of switching to a competitive, proposal-driven mode of funding research.
• A significant fraction of the biometrics/security community read the fuzzy extractor papers and got the point (in a nutshell: it is possible to be rigorous about the security properties of biometric systems). Unfortunately, that fraction is a lot less than 1.
• Making scientific posters is a questionable use of anyone’s time.

The first point is self-explanatory; I’ll save the second for a future discussion; and I haven’t figured out what to make of the third.

But the posters! Called post-ère in French (why not affiche? Nobody knew.), the scientific poster lies somewhere between talk slides and a regular written article. In its ideal form it provides both a quick overall impression of a work, for the casual passer-by, and more in-depth explanations, for the viewer with more time and patience. In practice, it is hacked together in a hurry and provides neither. For a typical failure, see my own feeble attempt here.

However, the real problem with posters is that even the good ones don’t seem to get read. It is almost always more compelling to listen to a bad talk than to read a good poster. What gives? By all rights, a poster session should be more interesting than a typical session of talks since you can allocate your time and attention more flexibly. But it doesn’t work — posters just don’t seem to make people care.

What is the best format for allowing everyone to get a short overview of all the papers, while also having the option of learning more about a given paper? Is the web pushing science into a post-post-ère era? Or do I just need to learn to read?

I am troubled by this since the IPAM workshop I’m co-organizing will have time for a relatively small number of talks, but we want to give all attendees a chance to present their ideas. The initial plan was to have a poster session, but I am having my doubts. Maybe a rump session with many five-minute talks is more effective, or perhaps a combination of the two (five-minute poster ads)?

July 29, 2009 at 9:36 pm

Posted in Conferences

Systematization of Knowledge track at Oakland

I am on the PC for “Oakland” this year (a.k.a. the IEEE Symposium on Security and Privacy).

I have been on the PC of a few conferences in areas outside my immediate expertise and so far I’ve enjoyed the experience. Usually, I am asked to join because they need someone to help them carefully review (read: reject) the few crypto/privacy papers that get submitted. Along the way, I get to learn about a different area of research, and about the taste in problems that is prevalent in the other community. Oakland is different because it is nominally about my area, but the author community is essentially disjoint from the STOC/FOCS/Crypto/Eurocrypt crowd that I hang out with; consequently, the focus of the submissions is very different from that of the papers I am used to reading. I promise to share any (constructive…) comments I have on my experience.

Anyway, my main point: this year’s Oakland will include a new “systematization of knowledge” track. The call for papers says it all:

“The goal of this [track] is to encourage work that evaluates, systematizes, and contextualizes existing knowledge. These papers will provide a high value to our community but would otherwise not be accepted because they lack novel research contributions. Suitable papers include survey papers that provide useful perspectives on major research areas, papers that support or challenge long-held beliefs with compelling evidence, or papers that provide an extensive and realistic evaluation of competing approaches to solving specific problems. … [Submissions] will be reviewed by the full PC and held to the same standards as traditional research papers, except instead of emphasizing novel research contributions the emphasis will be on value to the community. Accepted papers will be presented at the symposium and included in the proceedings.”

Well-written regular papers already include a “systematization of knowledge” component in their related work sections: the obligation to summarize related papers often results in a clean, concise presentation of their high-level ideas. Unfortunately, the quality of the related work section rarely makes or breaks a conference submission, so mileage varies; hence the need for a separate track.

1. If this “systematization” track becomes standard, how will job candidates be viewed if their publication lists contain many such systematization papers? A successful textbook can dramatically increase a researcher’s profile; is the same true of survey papers?
2. What areas of crypto/security/privacy are in direst need of “systematization”? Here are a few suggestions for Oakland-appropriate topics:
• Definitions of security for anonymous communication systems (e.g., Tor)
• Techniques for de-anonymization of sanitized data (hopefully tying together papers published at Oakland, KDD, SIGMOD, VLDB, ICDE, etc.)
• Notions of “computation with encrypted data”: homomorphic encryption, predicate encryption, deterministic encryption, order-preserving encryption, etc.
3. Assuming the Oakland systematization track is a success, what other conferences would benefit from adding such a track?

July 9, 2009 at 12:21 pm

Insensitive attributes

And the award for best blog post title of the day goes to…

The post, by Richard Power, reports on an article by Alessandro Acquisti and Ralph Gross, “Predicting Social Security numbers from public data” (faq, PNAS paper), which highlights how one can narrow down a US citizen’s social security number to a relatively small range based only on his or her state and date of birth.

As the Social Security Administration explained (see the elephantine blog post linked above), this was not really a secret; the SSA’s algorithm for generating SSNs is public. The virtue of the Acquisti-Gross article is in pointing out the security implications of this clearly.
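To make the shrinkage concrete, here is a back-of-the-envelope sketch. All concrete numbers below are hypothetical stand-ins, not figures from the Acquisti-Gross paper; the point is only that a public, structured assignment scheme (area numbers tied to state, group/serial numbers issued in a predictable order over time) collapses a nine-digit search space to something enumerable:

```python
# Hypothetical back-of-the-envelope estimate: how much does knowing
# state and date of birth shrink the space of candidate SSNs?
# (Every concrete number below is made up for illustration; the real
# figures are in the Acquisti-Gross PNAS paper.)

full_space = 10**9              # 9 digits: a billion possible SSNs

# Pre-2011 assignment: the first three digits (area number) depend on
# the issuing state, and group/serial numbers advance predictably.
areas_for_state = 10            # hypothetical: area numbers used by one state
issued_per_area_per_day = 300   # hypothetical: SSNs issued per area per day
window_days = 60                # hypothetical: uncertainty around the DOB

candidates = areas_for_state * issued_per_area_per_day * window_days
print(candidates)               # 180000 candidates instead of a billion
print(full_space // candidates) # the search space shrank ~5555-fold
```

With a candidate set this small, online verification services (or trial-and-error against any system that accepts SSNs as authenticators) finish the job, which is exactly why “insensitive” attributes like state and birth date are anything but.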

One of the interesting notions the study puts to rest is the distinction between “insensitive” and “sensitive” attributes. Almost anything can be used to identify a person, and once someone has a handle on you it is remarkably easy to predict, or find out, even more.

July 7, 2009 at 10:23 pm

Posted in Data privacy


Michaels Nielsen and Mitzenmacher pointed out a recent post by Harvard’s Stuart Shieber about the “don’t ask, don’t tell” policy that is the implicit norm in scholarly publications, at least in computer science.

“Publishers officially forbid online distribution, authors do it anyway without telling the publishers, and publishers don’t ask them to stop even though it violates contractual obligations. What happens when you refuse to play that game?”

I recommend reading the whole thing. Shieber does post his papers online and, unlike many authors, he makes sure to attach an addendum to any copyright agreements with publishers to ensure that he is not in breach of contract. Publishers almost never complain, he says.

“In retrospect, this may make sense.  Since the contractual modification applies only to a single article by a single author, it is unlikely that anyone looking for copyright clearance would even know that all copyright hadn’t been assigned to the publisher.  And in any case publishers must realize that authors act as if they have a noncommercial distribution license…”

I will consider using the Science Commons addenda for future copyright agreements with publishers. But just to share my own story: When we submitted the final version of the fuzzy extractors paper to SICOMP (SIAM Journal on Computing), Leo Reyzin suggested we explicitly modify SIAM’s copyright agreement to make it a “publication agreement” that confers only non-exclusive publication rights to SIAM. The revised agreement let us retain all other publication rights, including free online distribution via sites of our choice. For my readers’ entertainment, here is our modified agreement with SIAM, which SIAM accepted without comment.

Finally, David Eppstein points out that free online journals make all the hassle so last century.

P.S.: For a great radio show about what people usually mean by “don’t ask, don’t tell”, listen to the June 16 episode of NPR’s Fresh Air, in which Terry Gross interviews Nathaniel Frank, author of Unfriendly Fire.

July 7, 2009 at 4:36 pm

FOCS ’09 crypto accepts

The FOCS 2009 accepted papers are posted, with abstracts. See the chair’s comments here, and other topic-specific discussions here, here and here. Despite some excellent submissions not making it in, there are still some (few!) crypto papers, all of which look interesting. In no particular order:

• Steven Myers and abhi shelat. One bit encryption is complete
• Yi Deng, Vipul Goyal and Amit Sahai. Resolving the Simultaneous Resettability Conjecture and a New Non-Black-Box Simulation Strategy
• Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky and Amit Sahai. Extracting Correlations
• Yael Tauman Kalai, Xin Li and Anup Rao. 2-Source Extractors Under Computational Assumptions and Cryptography with Defective Randomness

Quantum crypto papers:

• André Chailloux and Iordanis Kerenidis. Optimal quantum strong coin flipping

Not exactly crypto, but highly relevant:

• Iftach Haitner. A Parallel Repetition Theorem for Any Interactive Argument
• Falk Unger. A Probabilistic Inequality with Applications to Threshold Direct Product Theorems

Scanning over the FOCS abstracts is hard because of information overload. I will try to read all of the papers on the list above (maybe I’ll even attend the conference) but for now two stand out because they resolve problems I have thought about:

First, Steve and abhi’s surprising paper (not yet available online), which gives a black-box construction of many-bit CCA-secure encryption from 1-bit CCA-secure encryption. This question is tied to very basic notions of authenticity and secrecy in cryptography. For CPA-secure encryption (that is, encryption secure against passive attacks), increasing the message length is straightforward: Goldwasser and Micali showed that encrypting each bit separately works (a classic example of a “hybrid” argument). However, for schemes that must resist active attacks, such as chosen-ciphertext attacks (CCA), bit-by-bit encryption fails miserably. Prior to this paper, there existed (limited) impossibility results, but no evidence that a black-box construction was possible.
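To see the failure concretely, here is a toy Python sketch. The one-bit scheme below is a made-up stand-in (a hashed-nonce one-time pad), not any real construction; the point is that bit-by-bit composition is malleable, so a chosen-ciphertext adversary can permute the per-bit ciphertexts, query the decryption oracle on the modified ciphertext (which is distinct from the challenge, so the oracle must answer), and undo the permutation:

```python
# Toy illustration (not a real scheme): why bit-by-bit encryption
# is malleable, and hence fails against chosen-ciphertext attacks.
import hashlib
import os

KEY = os.urandom(16)

def prf_bit(key, r):
    # PRF stand-in: one pseudorandom output bit derived from the nonce
    return hashlib.sha256(key + r).digest()[0] & 1

def enc_bit(key, b):
    # Encrypt a single bit: fresh nonce, bit masked by the PRF output
    r = os.urandom(16)
    return (r, b ^ prf_bit(key, r))

def dec_bit(key, c):
    r, masked = c
    return masked ^ prf_bit(key, r)

def enc(key, bits):
    # Bit-by-bit composition: the ciphertext is just a list of
    # independent one-bit ciphertexts
    return [enc_bit(key, b) for b in bits]

def dec(key, ct):
    return [dec_bit(key, c) for c in ct]

# CCA adversary: it may not ask the oracle to decrypt the challenge
# itself, so it swaps the first two components, queries the oracle on
# the (distinct) mauled ciphertext, and swaps the answer back.
message = [1, 0, 1, 1]
challenge = enc(KEY, message)
mauled = [challenge[1], challenge[0]] + challenge[2:]
assert mauled != challenge            # the oracle will answer this query
leaked = dec(KEY, mauled)             # decryption-oracle query
recovered = [leaked[1], leaked[0]] + leaked[2:]
assert recovered == message           # full plaintext recovered
```

The same permute-and-query trick works against any scheme whose ciphertext components can be rearranged or dropped independently, which is why CCA security requires the ciphertext to be non-malleable as a whole; the Myers–shelat result shows this can nonetheless be bootstrapped, black-box, from a single CCA-secure bit.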

Second, André and Iordanis’ paper on optimal strong quantum coin flipping. Information-theoretically secure quantum coin flipping was proved impossible in the late ’90s by Mayers and by Lo and Chau, using the same techniques that rule out information-theoretically secure oblivious transfer and bit commitment. That result only rules out protocols that produce a very good coin (one where no cheater can force an outcome with probability better than 1/2 + o(1), i.e., bias o(1)). However, protocols were constantly being proposed (and occasionally broken) which produced weakly biased coins. This new paper gives a protocol matching Kitaev’s lower bound, which says that in any protocol some party can force a desired outcome with probability at least $1/\sqrt{2}\approx 0.707$, i.e., the bias is at least $1/\sqrt{2}-1/2\approx 0.207$. This is not of critical importance in practice, but it does elucidate one of the key phenomena which distinguish quantum from classical cryptographic protocols.
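As a quick sanity check on where the odd-looking constant $1/\sqrt{2}$ comes from: Kitaev’s semidefinite-programming argument shows that in any strong coin-flipping protocol, the cheating probabilities for each outcome multiply to at least one half. Writing $P^*_A(c)$ for the best probability with which a cheating Alice can force outcome $c$ (and $P^*_B(c)$ likewise for Bob),

$$
P^*_A(c)\,P^*_B(c)\;\ge\;\frac{1}{2}
\qquad\Longrightarrow\qquad
\max\bigl\{P^*_A(c),\,P^*_B(c)\bigr\}\;\ge\;\frac{1}{\sqrt{2}}\;\approx\;0.707,
$$

so some cheater always attains bias at least $1/\sqrt{2}-1/2\approx 0.207$, and a protocol achieving exactly this bias (as Chailloux and Kerenidis construct) is optimal.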