Nudging in the Recommendations
Recommender systems currently work under a very specific mantra: they offer you stuff that you may like by doing computations on data that reflects what you like (e.g., star ratings), who you are (e.g., based on your social connectivity), or how you have behaved in the past (clicks, queries). One of the attention-grabbing “problems” that has emerged from the way these algorithms are applied (for example, on social networks) is that personalisation will ultimately lead to echo chambers, where you only consume the content that people exactly like you consume; the algorithm imprisons you inside an inescapable “bubble.” While it seems that we are finally moving beyond the “conventional wisdom” of the existence of these echo chambers (see the Facebook study directly here), one point remains: recommender systems still do not (by design) help you to change, achieve goals, or actively work on becoming who you want to be.
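To make the “computations on data that reflects what you like” part concrete, here is a minimal, purely illustrative sketch of user-based collaborative filtering on a toy ratings matrix. The numbers, users, and items are made up; real systems use far larger data and more sophisticated models, but the weighted-neighbour idea is the same.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items, 0 means "not rated".
# All numbers here are invented purely for illustration.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict(user, item, ratings):
    """Predict a user's rating of an item as a similarity-weighted average
    of the ratings given by other users who did rate that item."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine_similarity(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    if not sims or sum(sims) == 0:
        return 0.0
    sims, vals = np.array(sims), np.array(vals)
    return float(sims @ vals / sims.sum())

# User 0 never rated item 2; the prediction blends the other users' ratings,
# weighted by how similar their taste is to user 0's.
print(predict(0, 2, ratings))
```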
However, there is plenty of work on “nudging” people with technological means in order to change their behaviour. Most of it relies on turning invisible data into some kind of feedback and adding in a bit of fun: from turning stairs into a piano (to nudge you off that escalator), to reflective tables (that light up based on how much each person is talking), to community-level electricity consumption awareness, to mobile apps like UbiGreen and UbiFit. The magic of it all seems to be that, once people become aware of what they are doing, and the feedback is placed in an appropriate context (e.g., positive/negative emotional feedback in the form of smileys), behaviour starts to change.
Why don’t recommender systems build in aspects of nudging and feedback in order to help their users achieve their own goals, even if it’s something as banal as wanting to learn more about a particular genre of music, become healthier, or explore new restaurants? There are, instead, humorous examples of the opposite in action: take the “TiVo thinks I’m gay” example, where people ended up frustrated by their recommendations because they didn’t feel those recommendations reflected who they were.
I think that these two (historically separate) fields of research have a lot to say to each other. Persuasive-technology research seems to lack means of long-term engagement with people: how can in situ feedback and displays be turned into long-term interfaces that encourage people to change their behaviour and not slip back into their old routine once the novelty of the nudge has worn off? Recommender systems have been very successful in this area; they are routinely used online to increase engagement and improve customer retention. And how can you tailor the way you nudge to who you are nudging? Maybe what nudges one person towards more sustainable travel habits does not work for someone else: recommender systems are built around automatically learning exactly this kind of per-person tailoring. On the other hand, how can recommender systems become more than just “content filters?” Would a personalised nudge work?
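One (of many) ways to frame a “personalised nudge” is as a per-user bandit problem: for each person, try different nudge styles and gradually favour whichever one they actually respond to. The sketch below is only illustrative; the nudge names, the reward signal, and the epsilon-greedy choice are my own assumptions, not anything from the systems or projects mentioned above.

```python
import random
from collections import defaultdict

# Hypothetical nudge styles; the names are made up for illustration.
NUDGES = ["smiley_feedback", "peer_comparison", "goal_progress_bar"]

class PerUserNudger:
    """Epsilon-greedy selection of a nudge style, learned separately per user."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        # counts[user][nudge] = times shown; rewards[user][nudge] = summed responses
        self.counts = defaultdict(lambda: defaultdict(int))
        self.rewards = defaultdict(lambda: defaultdict(float))

    def choose(self, user):
        # Explore occasionally; otherwise pick the nudge with the best average response.
        if random.random() < self.epsilon or not self.counts[user]:
            return random.choice(NUDGES)
        return max(NUDGES, key=lambda n: self._avg(user, n))

    def update(self, user, nudge, responded):
        # `responded` is 1.0 if the user acted on the nudge, 0.0 otherwise.
        self.counts[user][nudge] += 1
        self.rewards[user][nudge] += responded

    def _avg(self, user, nudge):
        c = self.counts[user][nudge]
        return self.rewards[user][nudge] / c if c else 0.0

# Usage: show a nudge, observe whether the user acted on it, feed that back.
nudger = PerUserNudger()
nudge = nudger.choose("alice")
nudger.update("alice", nudge, responded=1.0)
```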
Background
I’ve recently started a new post at Cambridge on a project (with the fancy acronym UBHave) called Ubiquitous and Social Computing for Positive Behaviour Change, and have therefore been spending time thinking about how the research I’ve done in the past may be applicable here as well. Also, tonight I went to a very interesting talk by Yvonne Rogers about her work on behaviour change: I owe all the nudging examples linked above to her great talk.