
Behind the Filter Bubble: Hidden Maps of the Internet

A small corner of the world of search took another step toward personalization today, as Bing moved to let users tailor their results by drawing on data from their Facebook friends:

Research tells us that 90% of people seek advice from family and friends as part of the decision making process. This “Friend Effect” is apparent in most of our decisions and often outweighs other facts because people feel more confident, smarter and safer with the wisdom of their trusted circle.

Today, Bing is bringing the collective IQ of the Web together with the opinions of the people you trust most, to bring the “Friend Effect” to search. Starting today, you can receive personalized search results based on the opinions of your friends by simply signing into Facebook. New features make it easier to see what your Facebook friends “like” across the Web, incorporate the collective know-how of the Web into your search results, and begin adding a more conversational aspect to your searches.

The announcement almost perfectly coincides with the release of Eli Pariser’s book The Filter Bubble, which argues that “as web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a ‘filter bubble’ and don’t get exposed to information that could challenge or broaden our worldview.” I have worried before about both excessive personalization and the integration of layers of the web (such as social and search, or carrier and device). I think Microsoft may be reaching for one of the very few strategies available to challenge Google’s dominance in search. But I also fear that this is one more example of the “filter bubble” Pariser worries about.

Like Evgeny Morozov, Pariser persuasively demonstrates the downside of “community building” on the web; filter bubbles can be astonishingly insular. It’s an important message. Oren Bracha and I have shown how critically search technology affects both users’ autonomy and the possibilities for democracy. And as I noted last summer:

Heraclitus wrote that “for the waking there is one world, and it is common; but sleepers turn aside each one into a world of his own.” In our age of fragmented lifeworlds, narrowcasting, and personalization, internet searchers are increasingly like Heraclitus’s sleepers. They will increasingly consume customized media on the persons and events they take an interest in. Many will unwittingly enter a media environment shaped in ways they can’t understand. While some authors have lamented the effects of the “Daily Me” on politics, and others have noted the Kafkaesque implications of black box databases, few have considered the intersection of these trends. They threaten to make a scholarly understanding of media consumption difficult, as we have less and less objective sense of what’s really being presented as choices.

It’s a real tribute to Pariser’s persistence that he convinced Silicon Valley engineers to acknowledge and grapple with this reality. We’ll need many more thinkers like him to wake us from our technological somnambulism.

On the other hand, perhaps the integration of social networking into search can make search results a bit more understandable to users. Pariser suggests that even people inside Google can’t fully understand how its algorithms produce a given information environment for any particular user of its services:

Even if you’re not logged into Google, for example, an engineer told me there are 57 signals that the site uses to figure out who you are: whether you’re on a Mac or PC or iPad, where you’re located when you’re Googling, etc. And in the near future, it’ll be possible to “fingerprint” unique devices, so that sites can tell which individual computer you’re using. . . .

As Google engineer Jonathan McPhie explained to me, [personalization is] different for every person – and in fact, even Google doesn’t totally know how it plays out on an individual level. At an aggregate level, they can see that people are clicking more. But they can’t predict how each individual’s information environment is altered.

In general, the things that are most likely to get edited out are the things you’re least likely to click on. Sometimes, this can be a real service – if you never read articles about sports, why should a newspaper put a football story on your front page? But apply the same logic to, say, stories about foreign policy, and a problem starts to emerge. Some things, like homelessness or genocide, aren’t highly clickable but are highly important.
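Pariser’s account suggests a fairly simple mechanic beneath the opacity. The sketch below is a toy illustration of the two moves he describes: deriving a pseudonymous key from device signals, then editing out stories a user is unlikely to click. The signal names, scores, and threshold are my own illustrative assumptions, not anything Google or Bing has disclosed:

```python
import hashlib

# Toy "signal" collection: Pariser's 57 signals include device type and
# location. These keys and values are hypothetical, for illustration only.
def fingerprint(signals: dict) -> str:
    """Hash a canonical view of observed signals into a stable pseudonymous key."""
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Toy filtering: "the things most likely to get edited out are the things
# you're least likely to click on." Drop anything below a click threshold.
def personalize(stories: list, predicted_ctr: dict, threshold: float = 0.05) -> list:
    """Keep only the stories a model predicts this profile will click."""
    return [s for s in stories if predicted_ctr.get(s, 0.0) >= threshold]

user = fingerprint({"device": "Mac", "location": "NYC", "screen": "1440x900"})
stories = ["local sports", "celebrity news", "genocide report"]
ctr = {"local sports": 0.30, "celebrity news": 0.22, "genocide report": 0.01}  # made-up scores
print(user, personalize(stories, ctr))
# The "highly important but not highly clickable" story is silently dropped.
```

Even this toy version shows why the outcome is opaque to everyone involved: the user never sees the threshold, the scores, or the stories that were filtered out.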

If people have to choose between algorithmic and friend-based personalization, the latter may be more transparent than the former. On the other hand, the Bing-Facebook combine isn’t rushing to make its own methods public, so maybe it’s a wash.

X-Posted: Concurring Opinions.