
SIGIR: Research vs. Reality

Last week, I attended SIGIR 2010 in Geneva, where I presented a paper. The conference has left its traces online: a steady stream of tweets and some great blog posts (e.g., the Noisy Channel). You can even read comments about the incredible (and, for my part, unexpected) temperatures of some of the conference rooms.

It is always interesting to attend conferences, meet people, and see what research others do to fill their time. However, it was also interesting to attend SIGIR for another reason. Back when the notifications were released, there was a noticeable outcry about the poor quality of the reviewing process. Some authors chose to publish public rebuttals to the reviewers on their blogs; others wrote about the unending cycle of complaining that the IR community has spiraled into. “Not Relevant” was born in the wake of all this discussion to give SIGIR rejects an alternative venue to publish their work. I wanted to see how this outcry would be reflected in the conference itself - would the “complainers” not attend? Would the attendees be the subset of authors who are happy with SIGIR? In short: no. In the coffee breaks, I heard the same conversations replay themselves over and over, echoing a variety of problems in the IR-research community. Trying to make sense of the flurry of comments, I see two general areas that need attention:

1. The divergence between reality and research

As the web grows (and goes mobile), the breadth and types of information that people want to access have both grown and changed. However, image, mobile/contextual, and real-time (see tweet) search were largely underrepresented at the conference. A quick look at the conference program shows that it still focuses primarily on text and document retrieval. Can people’s information needs be fully captured in a text/document-oriented conference? Is the published research ignoring the latest trends in information access?

2. The divergence between research and reality

The flip side of the coin: SIGIR research has fallen into a methodological rut; the conference is “trapped by a very successful paradigm […where] people can do complex work, the quality of that work can be measured, and progress made.” There are two problems here. First, the community has been hypnotised by its metrics. The current research paradigm encourages researchers to produce “minor-delta” papers (i.e., “we propose an algorithm that improves a baseline by x%”) rather than look at novel problems (see #1 above); yet decades of such publications show no evidence of long-term, cumulative progress. Second, I continue to miss the link between these metrics and the users they are meant to serve (similar discussions often arise among recommender system researchers). Yes, there are lengthy arguments to be had here; the most important point, for now, is that this discussion needs to happen (and happen more frequently).

Lastly, a more general note, based on a question I was asked, that is worth pondering and relates to all the research we do. Why do we have presentation-based conferences? We take turns standing up, each giving a 20-minute summary of our paper, and relegate all meaningful conversation to short coffee breaks. How does this affect the research that we produce?

… will last week become the last SIGIR that I ever attend?