Blowing (Filter) Bubbles

I must certainly be living inside a filter bubble: I’ve heard about Eli Pariser’s book so many times that I feel like I’ve read it (which I have not, although I have seen his TED talk). So, when I saw on twitter that he was giving a talk at LSE, I decided to go and hear what he had to say. I’m sure that his book has plenty of details that he did not have time to go through in 40 minutes, but I hope that (he feels) he got his main points across.

If you are not familiar with his argument, here it is in a nutshell (if you are, skip down for some thoughts!):

Imagine a world where all the news you see is defined by your salary, where you live, and who your friends are. Imagine a world where you never discover new ideas. And where you can’t have secrets.

Welcome to 2011.

Google and Facebook are already feeding you what they think you want to see. Advertisers are following your every click. Your computer monitor is becoming a one-way mirror, reflecting your interests and reinforcing your prejudices.

The internet is no longer a free, independent space. It is commercially controlled and ever more personalized. In this talk, Eli Pariser will reveal how this hidden web is starting to control our lives – and shows what we can do about it.

Fear-mongering aside (although that is a point I feel quite strongly about), the difference between online personalization and previous media, he explains, amounts to three points. The filter bubbles are:

  • Unique: everyone has their own; there is no common audience anymore.
  • Invisible: “we don’t know on what basis it works, and what the editorial viewpoint is.”
  • Passive: we do not have a choice as to whether to participate; it is increasingly hard to escape them.

Moreover, there are three key problems that emerge. Here they are, with some short quotes that I was able to throw down during the presentation:

  • The Distortion Problem: “we are no longer aware [of the alternative point of view]” … “we all get cranky when something appears that challenges what we believe in.”
  • The Psychological Equivalent of Obesity: we struggle to balance between our short term compulsive habits (e.g., watching Iron Man) vs. our long term aspirational selves (e.g., watching Holocaust Documentaries); we want to be fit, but we also want to eat cake.
  • A Matter of Control: “choice is a function of understanding what our options are” …code doesn’t have ethics embedded into it or any editorial values. In fact, “technology is neither good nor evil nor is it neutral (Kranzberg).”

There are many reasons why I disagree with what he proposes, not least because it is based on anecdotal evidence rather than anything truly measured. So, here is a quick response. You can call it my recliner of rage (à la Bernard).

Ascribing pre-existing problems to personalization

Two of the three points (invisible, passive) existed in broadcast media well before the dawn of the Internet. In fact, printed media is even more invisible (I don’t know the editor himself) and passive (I could not blog about it). The point about uniqueness, to me, is simply not a problem. It is what makes the tweets from those I follow interesting; it is what makes talk at lunch different; it is what makes a conversation at the pub more vibrant: we have each had our own (not necessarily different) view of a story, and sharing it becomes a matter of debate rather than report.

Similarly, I don’t see how the three problems (distortion, psychological obesity, control) come as a consequence of personalization, rather than as a consequence of the pre-existing high-speed information highways that we now all surf. How many of us have already spent more time watching YouTube instead of reading Dostoyevsky (no personalization required)? “Hide your kids, hide your wife [cause nobody’s reading Dostoyevsky down here].”

Why he is unfair to computers (algorithms/recommender systems)

He’s right; everyone knows that “if you liked this, you might like that” is the basis for personalization. In techno-parlance, this is collaborative filtering. Computers eat up the data, perform complex statistical operations (which, arguably, are well beyond the this/that paradigm by now), and spit out recommendations. From that point of view, why not take the stage to scold the audience, who have been busy clicking away on the wrong things? The algorithms are no more than a reflection of our collective self: more on that in the last section below.
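
To make the “this/that” idea concrete, here is a minimal sketch of item-based collaborative filtering in Python. It is only a toy: the ratings matrix, the cosine-similarity choice, and the weighted-average prediction rule are stand-ins for the far more complex statistics that real systems use.

```python
# A toy item-based collaborative filter ("if you liked this, you might like that").
# Everything here is illustrative: the ratings, similarity measure, and prediction
# rule are made up for this sketch, not taken from any real system.
import numpy as np

# Rows are users, columns are items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two item rating vectors."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / norm if norm else 0.0

n_items = ratings.shape[1]
# Pairwise item-item similarities: items rated highly by the same people look alike.
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def predict(user, item):
    """Estimate a rating as a similarity-weighted average of the user's other ratings."""
    rated = [j for j in range(n_items) if ratings[user, j] > 0 and j != item]
    weights = np.array([sim[item, j] for j in rated])
    if weights.sum() == 0:
        return 0.0
    return float(weights @ ratings[user, rated] / weights.sum())

print(predict(user=0, item=2))  # how much might user 0 like item 2?
```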

Moreover, research led by Xavier Amatriain shows that we are also quite bad at saying what we do and do not like. Those researching collaborative filtering struggle with the fact that we often click on things we do not like, or rate a movie 1* on a Wednesday when we would have given it 3* on a Friday. In fact, the wider problem here is that taste and preference are ill-defined, and computers are notoriously bad at computing anything that is ill-defined. What I’m saying is that recommender systems don’t just give you more of the same; they are terrible at many things (particularly, accounting for how my preferences may change or grow). Case in point: I’ve discovered new music by sharing with others (when we were interns): we shared many preferences (like work, recommender systems, and computer science: the “this”) but differed on many other fronts (the “that”). Without collaborative filtering, there would be many people whose preferences would never influence me. And sometimes, even randomness becomes a part of the algorithm (disclosure: my research!).
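
As a rough illustration of that last point (a sketch only, not the actual method from that research), one simple way randomness can be folded into a recommender is to hand a couple of slots in the top-N list to items drawn from outside the ranked results. The function name, the swap rate, and the inputs below are all assumptions made for this example.

```python
# Illustrative only: blend a little randomness into a top-N recommendation list.
# `scored_items` is a list of (item, score) pairs from any recommender (e.g. the
# sketch above); `catalogue` is the full set of items that could be recommended.
import random

def recommend_with_serendipity(scored_items, catalogue, n=10, n_random=2, seed=None):
    """Return a top-n list where a few slots are filled by random catalogue items."""
    rng = random.Random(seed)
    ranked = [item for item, _ in sorted(scored_items, key=lambda x: -x[1])]
    top = ranked[:n - n_random]
    # Candidates the ranked list alone would never have surfaced.
    leftovers = [item for item in catalogue if item not in top]
    top += rng.sample(leftovers, min(n_random, len(leftovers)))
    return top
```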

Why he is unfair to how we use the web

In terms of how we use these systems, I very much agree with what Greg Linden has blogged about the topic. People often use personalized systems to discover things that they otherwise would never find. There are countless bands (via last.fm), books (via amazon), and people (via twitter) that I have found that–had it not been for my personalized feed–would have remained well outside the realm of my awareness. You want an anecdote to support that? I’m about to start reading The Possibility of an Island because I bought an album by Iggy Pop that was recommended to me online, and Iggy himself was inspired by reading the book. And you, who probably only share the fact that you are somewhat interested in Pariser’s work (he becomes the “this” here), now know about Houellebecq and Iggy (the “that”).

Why he is unfair to us (as humans)

A key quote that Pariser presented was one by Zuckerberg, something along the lines of: a squirrel dying on your street might be more relevant to you than a person dying in Africa. As I commented at the end of the talk, I’m sure that it would be hard for anyone to argue that a squirrel is more important than a person (maybe if the squirrel was being run over by the tank of an invading army?). But hidden behind the comments on this quote is the idea that an objective account of importance exists, and I fail to see why that is the case.

Of course, Pariser said: “there is a war going on.” And I would not disagree when he says that people should know about it. But, really? There are actually many wars going on. Which war is relevant? At what point does a subjective account of importance lead you to conclude that recommender systems are a failure?

Trivial things happen online. Get over it. That’s what I say to everyone who still thinks that Twitter is about lunch sandwiches, while they ask me about the weather or delays on the tube as we ride the lift. That doesn’t mean that we lack an innate curiosity to seek out what is happening around us and to form opinions about the events that are shaping our times. Nor does it mean that we, ourselves, are unable to change and instead relish lonely echo chambers over the greater society that we live in.

Moreover, claiming that we are being duped by filter bubbles assumes that once a great technology is built (like Google), we will switch our brains off. As if, when the car was invented, we amputated our legs: “we don’t need these anymore!”

The vision of a world of filter bubbles makes claims about the world we ought to live in, rather than describing and explaining people’s behaviour. As well as being based on many (false) assumptions about technology, it relies heavily on the idea that there is an objective standard of importance that we ought to subscribe to. Eli: we should be learning about people’s behaviour rather than making normative claims about what we ought to find important.

p.s. If he is reading this post: Hi Eli. Great talk! I feel you skirted my question (about the last section above). Would love your thoughts. But, if you’re looking to escape your bubble, go read something different.

p.p.s. If the LSE organizers are reading: Thanks for taking my question (eventually). Also, I’m sure twitter would have been more alight with activity if there had been some wifi available for the audience!

Tweets about the event here.