Intelligent Techniques for Web Personalization

Yesterday, I attended the 7th IJCAI Workshop on Intelligent Techniques for Web Personalization & Recommender Systems, in Pasadena. The IJCAI-2009 technical program will start on Tuesday. Here’s a summary of the sessions during the day:

1. Modeling and Personalization Strategies

Stephanie Rosenthal (Carnegie Mellon) presented her work on the online selection of recommender algorithms; she proposed an online switching algorithm that moves between global and domain-specific recommenders based on their recent (temporal) performance. Interestingly, she used three datasets, including the recently released Yahoo! Webscope data. Her work is related to the paper I am presenting next week at SIGIR.
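I didn't note her exact selection rule, so the snippet below is only a minimal sketch of the general idea: a wrapper that routes each prediction to whichever recommender has had the lowest recent error. The class, its sliding window, and the recommender interface are my own illustrative assumptions, not her algorithm.

```python
import numpy as np

class OnlineRecommenderSwitch:
    """Sketch of online switching: at each step, use the recommender
    with the lowest mean error over a recent window of feedback.

    `recommenders` is a list of objects exposing predict(user, item);
    the interface and window rule here are illustrative only.
    """

    def __init__(self, recommenders, window=100):
        self.recommenders = recommenders
        self.window = window
        # one list of recent absolute errors per recommender
        self.errors = [[] for _ in recommenders]

    def predict(self, user, item):
        # pick the recommender with the lowest recent mean error
        recent = [np.mean(e[-self.window:]) if e else 0.0 for e in self.errors]
        best = int(np.argmin(recent))
        return self.recommenders[best].predict(user, item)

    def update(self, user, item, true_rating):
        # feed the observed rating back to keep every error estimate current
        for idx, rec in enumerate(self.recommenders):
            self.errors[idx].append(abs(rec.predict(user, item) - true_rating))
```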

Fabian Bohnert (Monash U, Australia) presented a personalised technology for guiding visitors through a museum. Based on manually collected data, he introduced a model (combining spatial information with item-similarity information) for predicting how visitors move in this context.

Paolo Viappiani (U of Toronto) presented a regret-based model for producing a set of recommendations; the aim is to maximise the utility of the recommended list as a whole (for example, a list of apartments to go and look at).
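This is the gist as I understood it rather than his formulation: one toy way to make the regret idea concrete is to capture uncertainty about the user with a handful of candidate utility functions, and to pick the set of items with the smallest worst-case regret. The apartment data, utility functions, and brute-force search below are purely illustrative.

```python
from itertools import combinations

def setwise_max_regret(S, items, utilities):
    """Worst-case regret of recommending set S: over every candidate
    utility function, how much better could the user have done by
    picking the best item overall instead of the best item in S?"""
    return max(
        max(u(x) for x in items) - max(u(x) for x in S)
        for u in utilities
    )

def minimax_regret_set(items, utilities, k):
    """Brute force: the size-k set with the smallest worst-case regret."""
    return min(combinations(items, k),
               key=lambda S: setwise_max_regret(S, items, utilities))

# toy example: apartments scored under two plausible user utility functions
apartments = [
    {"rent": 900, "size": 45},
    {"rent": 1200, "size": 70},
    {"rent": 700, "size": 30},
]
utilities = [
    lambda a: -a["rent"],                    # a user who only cares about price
    lambda a: a["size"] * 20 - a["rent"],    # a user trading off size and price
]
print(minimax_regret_set(apartments, utilities, k=2))
# picks the cheap apartment plus the big one, hedging between the two users
```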

2. Enabling Technologies

Jinhyuk Choi (KAIST, S. Korea) discussed his work on the analysis of contextual web usage patterns. In trying to answer the question “what contextual factors exert influence on the usage logs?”, he highlighted the difference between navigational and content pages, between casual tasks and careful searching, and the (explicitly requested) credibility that users attributed to different pages.

Tarik Hadzic (UC Cork, Ireland) presented his work on representing catalogues as multi-valued decision diagrams, including scalable techniques for merging nodes in the graph in order to reduce the diagrams’ size.
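I don’t have the details of his scalable merging techniques, but the basic decision-diagram reduction idea (collapsing nodes on the same layer that have identical outgoing edges) can be sketched roughly as below; the layered encoding and function name are my own illustration, not the paper’s.

```python
def merge_equivalent_nodes(layers):
    """Reduce a layered multi-valued decision diagram by merging nodes
    whose outgoing edges (value -> child) are identical.

    `layers` is a list of dicts, one per layer: node_id -> {value: child_id}.
    Processing runs bottom-up so that children are already canonical when a
    layer is handled. Returns the reduced layers and a node_id remapping.
    """
    remap = {}
    reduced = []
    for layer in reversed(layers):
        seen = {}        # edge signature -> canonical node id
        new_layer = {}
        for node_id, edges in layer.items():
            # rewrite edges through the remapping computed for lower layers
            canon_edges = {v: remap.get(child, child) for v, child in edges.items()}
            signature = tuple(sorted(canon_edges.items()))
            if signature in seen:
                remap[node_id] = seen[signature]   # merge duplicate node
            else:
                seen[signature] = node_id
                new_layer[node_id] = canon_edges
        reduced.append(new_layer)
    reduced.reverse()
    return reduced, remap
```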

Eduardo Eisman (U of Granada, Spain) explored a widespread problem on the Internet: information is diverse and lacks any homogeneous structure, and the problem is compounded by the fact that search engines return too many results. He proposed and evaluated a context-sensitive, multilingual, real-time virtual assistant to help users search for information on a website.

3. Invited Talk

The invited talk was given by Barry Smyth (director of the CLARITY centre in Dublin). His talk was about search, “the web’s killer app.” Barry is a great speaker - his talk was roughly split into three parts:

The State of Search. Barry discussed interesting facets of web search. For example, 52% of searches are ‘failures,’ where users need to shuffle or reword their query (try googling for “umap”, the conference on user modeling, adaptation and personalization, and see where the conference is ranked). He also reported on studies of people’s searching habits: 57% of searches are for discovery, while the remaining 43% are for recovery - regaining access to previously visited resources.

Context in Search. He then outlined the growing interest in applying context to search. Current work can be roughly categorised into augmenting search with user profiles (e.g. this paper), with document context (e.g. the targeted ads from Google AdSense), and with task context (e.g. searching for ‘pizza’ on the iPhone and finding pizza restaurants around you).

What is Missing from Search Context? Barry then discussed his work on the one aspect of search that isn’t represented in the above - social context, or what the people around us are searching for. In fact, two thirds of what we search for is likely to have already been found by a friend; the aim of social search is to harvest this knowledge and use it to improve our results. He introduced HeyStaks, a browser plugin that does just this, allowing users to categorise and share their searches and search results. He then explored a wide range of aspects of the plugin, from the social graph of collaboration, to how the plugin builds a meta-search engine (blending results from Bing, Google, etc.), to the inbuilt reputation system that counteracts spam. He ended on an interesting note: there is much more to improving user experience than tweaking an algorithm (a comment on the Netflix prize?).

4. Recommender Algorithms + Hybrid Recommenders

Dietmar Jannach (Dortmund U, Germany) presented a very interesting user study on the business end of a recommender system: how much does introducing a recommender system to a mobile gaming shop shift users’ shopping behaviour? The work involved extending a game-selling app with a variety of recommender systems (popularity, collaborative filtering, etc.) and measuring how each affected sales. 150,000 users participated in the study (wow!). They measured increases in sales of between 3.2% and 3.6%; the open problem this highlights is deciding when to apply which recommender algorithm.

Bamshad Mobasher (DePaul University, Chicago) discussed his students’ work on adapting kNN for tag recommendation in folksonomies. Suggesting tags for users to apply to content has the interesting effect of reducing the effort and errors involved in tagging, and of encouraging tags to be reused. The algorithm he presented outperforms the FolkRank algorithm!
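I didn’t capture the details of their kNN adaptation, so the following is only an illustrative user-based sketch of tag recommendation in a folksonomy: find the users with the most similar tag profiles, then score the tags those neighbours applied to the target item. The similarity measure, parameters, and data layout are my assumptions.

```python
from collections import Counter, defaultdict
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two tag-count profiles (Counters)."""
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend_tags(user, item, annotations, k=5, n=3):
    """annotations: list of (user, item, tag) triples from the folksonomy.
    Returns the n tags most strongly supported by the k most similar users."""
    profiles = defaultdict(Counter)   # user -> tag usage counts
    item_tags = defaultdict(list)     # (user, item) -> tags applied
    for u, i, t in annotations:
        profiles[u][t] += 1
        item_tags[(u, i)].append(t)

    user_profile = profiles[user]
    neighbours = sorted(
        (u for u in profiles if u != user),
        key=lambda u: cosine(user_profile, profiles[u]),
        reverse=True,
    )[:k]

    # weight each neighbour's tags on this item by their similarity
    scores = Counter()
    for u in neighbours:
        sim = cosine(user_profile, profiles[u])
        for t in item_tags.get((u, item), []):
            scores[t] += sim
    return [t for t, _ in scores.most_common(n)]
```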

My talk followed, based on a follow-up to the SIGIR ‘09 paper that reports on the work I participated in during my internship at Telefonica. The last talk was cancelled.

5. Wrap-Up/Discussion

We finished the workshop with a discussion on the themes that emerged during the day. By this point, my laptop was dead and so I did not take notes (if you did, and would like to post on this blog about it, then please get in touch). However, we discussed evaluation metrics, user studies, context, …

Overall, the workshop was very interesting and a big thanks is due to the organisers: Sarabjot Singh Anand (Warwick University), Bamshad Mobasher (DePaul University), Alfred Kobsa (University of California Irvine) and Dietmar Jannach (Technical University of Dortmund).