RecSys 2007
I just came back from RecSys ’07, and it was quite an experience. The great thing about the conference was the balance between academic and industry representatives: researchers from around the world and representatives from some of the best-known recommender systems out there (Amazon, Netflix, MyStrands, etc.) teaching each other that even after nearly a decade of research, our understanding of recommender systems is only at its beginning.
The papers were split into four sessions, each giving a different taste of current research:
- Privacy and trust (including a great presentation by Paolo Massa on his PhD work)
- Algorithms: Collaborative Filtering
- User Issues in Recommender Systems
- Algorithms: Learning
Common themes ranged from grouping similar users, to countering profile-injection attacks, to using available user information to improve the performance of these systems, each approached from a different point of view and with an equally wide range of solutions. The conference proceedings are definitely worth checking out for more information. The demo and poster session was very interesting as well: I saw a demo of Claudio’s poolcasting (see previous post) and so many posters that it is hard to write any kind of short summary of them. Conclusion: research work on recommender systems is very broad, and highly active around the world.
However, some of the more interesting points came up during the panel sessions, which gave a chance for the industry representatives to have their say, and for ideas to bounce around. Some questions that came up:
- Why are we building recommender systems? Short answer: no consensus. Read any research paper and it will discuss the problem of information overload that users face. Ask a business running a web site and they will say that it's a technology that will help them make money (Netflix wouldn't throw a million dollars into a competition to improve their algorithm if they didn't think they would make all that money, and more, back!)
- How should we properly evaluate recommender systems? Short answer: no consensus. The industry representatives kept talking about user experience, as if a researcher's focus on precision/accuracy metrics (see the sketch after this list for a concrete example of one such metric) is somehow off the mark when it comes to actually deploying these systems. They may be right; precision doesn't measure serendipity or interface issues. On the other hand, shouldn't the interface and the underlying algorithm be separate issues? And don't user ratings reflect the experience users are having within the system, so that a highly accurate algorithm should be able to provide a good experience? Considering user experience doesn't exclude the need to measure the performance of algorithms; imagine if user experience were one of the metrics used when designing TCP!
- Do we know when the recommender system problem has been solved? Short answer: no. (I don't think anybody even tried answering this one.)
- What is the difference between recommendations and search? Until the conference, I personally thought this matter was settled, and having a conference solely dedicated to recommenders seemed to confirm it. Search, or information retrieval, is about helping a user find a target through queries or single-session model building. A recommender system, on the other hand, presents the user with targets without requiring any form of query, using a model of that user (and their community) built over time. However, there was a lot of discussion about this; the two fields still overlap quite a bit. In particular, some of the papers presented showed "recommender systems" that lead the user through a directed dialogue with the system in order to provide results (e.g. find a hotel based on these constraints).
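To make the precision point above concrete, here is a minimal sketch of one such accuracy metric, precision@k: the fraction of the top-k recommended items the user actually found relevant. The names and data are purely illustrative, not from any paper or system presented at the conference.

```python
# A minimal sketch of one common accuracy metric (precision@k).
# All names and data here are illustrative.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items the user actually liked."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Example: 2 of the top 5 recommendations are relevant, so
# precision@5 = 2/5 = 0.4 -- a number that says nothing about
# serendipity or how the recommendations were presented.
recommended = ["m12", "m7", "m3", "m99", "m41"]
relevant = {"m7", "m41", "m8"}
print(precision_at_k(recommended, relevant, 5))  # 0.4
```

Note how the metric is entirely blind to the interface and to surprise value, which is exactly the gap the industry folks kept pointing at.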
Overall, a great conference. It was great to finally put faces to the authors I've been reading these past few months, but it was also a two-day highlight of the inconsistencies and unresolved issues separating academic and industry views of recommender systems. Any thoughts?