Filter Bubble

Concept

From the Wikipedia:

"A filter bubble is a result of a personalized search in which a website algorithm selectively guesses what information a user would like to see based on information about the user (such as location, past click behavior and search history) and, as a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles.The choices made by the algorithms are not transparent. Prime examples are Google Personalized Search results and Facebook's personalized news stream. The term was coined by internet activist Eli Pariser in his book by the same name; according to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Pariser related an example in which one user searched Google for "BP" and got investment news about British Petroleum while another searcher got information about the Deepwater Horizon oil spill and that the two search results pages were "strikingly different". The bubble effect may have negative implications for civic discourse, according to Pariser, but there are contrasting views suggesting the effect is minimal and addressable." (https://en.wikipedia.org/wiki/Filter_bubble)

Discussion

Research indicates the Filter Bubble 'echo chamber' is a myth

Helen Margetts:

"This explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics can explain why many political developments of our time seem to come from nowhere. It can help to understand the shock waves of support that brought us the Italian Five Star Movement, Podemos in Spain, Jeremy Corbyn, Bernie Sanders, and most recently Brexit and Trump – all of which have campaigned against the “establishment” and challenged traditional political institutions to breaking point.

Each successive mobilization has made people believe that challengers from outside the mainstream are viable – and that is in part what has brought us unlikely results on both sides of the Atlantic. But it doesn’t explain everything.

We’ve had waves of populism before – long before social media (indeed many have made parallels between the politics of 2016 and that of the 1930s). While claims that social media feeds are the biggest threat to democracy, leading to the “disintegration of the general will” and “polarization that drives populism” abound, hard evidence is more difficult to find.

The mechanism that is most often offered for this state of events is the existence of echo chambers or filter bubbles. The argument goes that first social media platforms feed people the news that is closest to their own ideological standpoint (estimated from their previous patterns of consumption) and second, that people create their own personalized information environments through their online behaviour, selecting friends and news sources that back up their world view.

Once in these ideological bubbles, people are prey to fake news and political bots that further reinforce their views. So, some argue, social media reinforces people’s current views and acts as a polarizing force on politics, meaning that “random exposure to content is gone from our diets of news and information”. Really? Is exposure less random than before? Surely the most perfect echo chamber would be the one occupied by someone who only read the Daily Mail in the 1930s – with little possibility of other news – or someone who just watches Fox News? Can our new habitat on social media really be as closed off as these environments, when our digital networks are so very much larger and more heterogeneous than anything we’ve had before?

Research suggests not. A recent large-scale survey (of 50,000 news consumers in 26 countries) shows how those who do not use social media on average come across news from significantly fewer different online sources than those who do. Social media users, it found, receive an additional “boost” in the number of news sources they use each week, even if they are not actually trying to consume more news. These findings are reinforced by an analysis of Facebook data, where 8.8 billion posts, likes and comments were posted during the US election.

Recent research published in Science shows that algorithms play less of a role in exposure to attitude-challenging content than individuals’ own choices and that “on average more than 20% of an individual’s Facebook friends who report an ideological affiliation are from the opposing party”, meaning that social media exposes individuals to at least some ideologically cross-cutting viewpoints: “24% of the hard content shared by liberals’ friends is cross-cutting, compared to 35% for conservatives” (the equivalent figures would be 40% and 45% if random).
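
Restating those percentages as ratios makes the size of the effect clearer. The short sketch below only does arithmetic on the figures quoted above; no new data is introduced.

```python
# Arithmetic on the Science-study figures quoted above: what fraction of
# the random-baseline cross-cutting exposure actually survives filtering?
observed = {"liberals": 0.24, "conservatives": 0.35}
random_baseline = {"liberals": 0.40, "conservatives": 0.45}

for group, share in observed.items():
    print(f"{group}: {share / random_baseline[group]:.0%} of the random baseline")
# liberals: 60%, conservatives: 78% -- filtered, but far from hermetically sealed.
```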

In fact, companies have no incentive to create hermetically sealed (as I have heard one commentator claim) echo chambers. Most of social media content is not about politics (sorry guys) – most of that £5 billion advertising revenue does not come from political organizations. So any incentives that companies have to create echo chambers – for the purposes of targeted advertising, for example – are most likely to relate to lifestyle choices or entertainment preferences, rather than political attitudes.

And where filter bubbles do exist they are constantly shifting and sliding – easily punctured by a trending cross-issue item (anybody looking at #Election2016 shortly before polling day would have seen a rich mix of views, while having little doubt about Trump’s impending victory).

And of course, even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people whom (for example) the Leave and Trump campaigns needed to reach. And from the research, it looks like they managed to do just that. A barrage of evidence suggests that such advertising was effective in the 2015 UK general election (where the Conservatives spent 10 times as much as Labour on Facebook advertising), in the EU referendum (where the Leave campaign also focused on paid Facebook ads) and in the presidential election, where Facebook advertising has been credited for Trump’s victory, while the Clinton campaign focused on TV ads. And of course, advanced advertising techniques might actually focus on those undecided voters, identified from their conversations. This is not the bottom-up political mobilization that fired off support for Podemos or Bernie Sanders. It is massive top-down advertising dollars.

Ironically however, these huge top-down political advertising campaigns have some of the same characteristics as the bottom-up movements discussed above, particularly sustainability. Former New York Governor Mario Cuomo’s dictum that candidates “campaign in poetry and govern in prose” may need an update. Barack Obama’s innovative campaigns of online social networks, micro-donations and matching support were miraculous, but the extent to which he developed digital government or data-driven policy-making in office was disappointing. Campaign digitally, govern in analogue might be the new mantra." (https://www.weforum.org/agenda/2016/12/of-course-social-media-is-transforming-politics-but-it-s-not-to-blame-for-brexit-and-trump)


Book

* Eli Pariser. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin, 2012


Interview

Via Amazon [1]:

"Q: What is a “Filter Bubble”?

A: We’re used to thinking of the Internet like an enormous library, with services like Google providing a universal map. But that’s no longer really the case. Sites from Google and Facebook to Yahoo News and the New York Times are now increasingly personalized – based on your web history, they filter information to show you the stuff they think you want to see. That can be very different from what everyone else sees – or from what we need to see.

Your filter bubble is this unique, personal universe of information created just for you by this array of personalizing filters. It’s invisible and it’s becoming more and more difficult to escape.


Q: I like the idea that websites might show me information relevant to my interests—it can be overwhelming how much information is available. I already only watch TV shows and listen to radio programs that are known to have my same political leaning. What’s so bad about this?

A: It’s true: We’ve always selected information sources that accord with our own views. But one of the creepy things about the filter bubble is that we’re not really doing the selecting. When you turn on Fox News or MSNBC, you have a sense of what their editorial sensibility is: Fox isn’t going to show many stories that portray Obama in a good light, and MSNBC isn’t going to show the ones that portray him badly. Personalized filters are a different story: You don’t know who they think you are or on what basis they’re showing you what they’re showing. And as a result, you don’t really have any sense of what’s getting edited out – or, in fact, that things are being edited out at all.


Q: How does money fit into this picture?

A: The rush to build the filter bubble is absolutely driven by commercial interests. It’s becoming clearer and clearer that if you want to have lots of people use your website, you need to provide them with personally relevant information, and if you want to make the most money on ads, you need to provide them with relevant ads. This has triggered a personal information gold rush, in which the major companies – Google, Facebook, Microsoft, Yahoo, and the like – are competing to create the most comprehensive portrait of each of us to drive personalized products. There’s also a whole “behavior market” opening up in which every action you take online – every mouse click, every form entry – can be sold as a commodity.


Q: What is the Internet hiding from me?

A: As Google engineer Jonathan McPhie explained to me, it’s different for every person – and in fact, even Google doesn’t totally know how it plays out on an individual level. At an aggregate level, they can see that people are clicking more. But they can’t predict how each individual’s information environment is altered.

In general, the things that are most likely to get edited out are the things you’re least likely to click on. Sometimes, this can be a real service – if you never read articles about sports, why should a newspaper put a football story on your front page? But apply the same logic to, say, stories about foreign policy, and a problem starts to emerge. Some things, like homelessness or genocide, aren’t highly clickable but are highly important.
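
The click-driven editing described here amounts to ranking by predicted click-through rate. A toy sketch (with invented CTR numbers, purely for illustration) shows how important-but-unclickable topics fall off a personalized front page:

```python
# Rank stories by predicted click-through rate (CTR); numbers are invented.
stories = [
    ("Local football recap",    0.09),
    ("Celebrity gossip",        0.12),
    ("Homelessness report",     0.02),  # important, rarely clicked
    ("Foreign policy analysis", 0.03),  # important, rarely clicked
]

front_page = sorted(stories, key=lambda s: s[1], reverse=True)[:2]
print([title for title, ctr in front_page])
# ['Celebrity gossip', 'Local football recap'] -- the important stories vanish.
```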


Q: Which companies or websites are personalizing like this?

A: In one form or another, nearly every major website on the Internet is flirting with personalization. But the one that surprises people most is Google. If you and I Google the same thing at the same time, we may get very different results. Google tracks hundreds of “signals” about each of us – what kind of computer we’re on, what we’ve searched for in the past, even how long it takes us to decide what to click on – and uses them to customize our results. When the result is that our favorite pizza parlor shows up first when we Google pizza, it’s useful. But when the result is that we only see the information that is aligned with our religious or social or political beliefs, it’s difficult to maintain perspective.


Q: Are any sites being transparent about their personalization?

A: Some sites do better than others. Amazon, for example, is often quite transparent about the personalization it does: “We’re showing you Brave New World because you bought 1984.” But it’s one thing to personalize products and another to personalize whole information flows, like Google and Facebook are doing. And very few users of those services are even marginally aware that this kind of filtering is at work.


Q: Does this issue of personalization impact my privacy or jeopardize my identity at all?

A: Research psychologists have known for a while that the media you consume shapes your identity. So when the media you consume is also shaped by your identity, you can slip into a weird feedback loop. A lot of people see a simple version of this on Facebook: You idly click on an old classmate, Facebook reads that as a friendship, and pretty soon you’re seeing every one of John or Sue’s posts.
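
That feedback loop is easy to model. The toy simulation below is purely an assumption (the interest values and update rule are invented), but it shows how a single idle click can snowball once exposure and interest reinforce each other:

```python
# One idle click is read as strong interest; the feed then favours that
# interest, and each exposure reinforces it further. Values are invented.
interests = {"old_classmate": 0.1, "world_news": 0.5}
interests["old_classmate"] += 0.5  # the idle click, read as friendship

for step in range(4):
    shown = max(interests, key=interests.get)            # feed picks the top interest
    interests[shown] = min(1.0, interests[shown] + 0.1)  # exposure reinforces it
    print(step, shown, round(interests[shown], 2))
# "old_classmate" wins every round -- soon you see every one of their posts.
```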

Gone awry, personalization can create compulsive media – media targeted to appeal to your personal psychological weak spots. You can find yourself eating the equivalent of information junk food instead of having a more balanced information diet.


Q: You make it clear that while most websites’ user agreements say they won’t share our personal information, they also maintain the right to change the rules at any time. Do you foresee sites changing those rules to profit from our online personas?

A: They already have. Facebook, for example, is notorious for its bait-and-switch tactics when it comes to privacy. For a long time, what you “Liked” on Facebook was private, and the site promised to keep it that way. Then, overnight, they made that information public to the world, in order to make it easier for their advertisers to target specific subgroups.

There’s an irony in the fact that while Rolex needs to get Tom Cruise’s permission to put his face on a billboard, it doesn’t need to get my permission to advertise my endorsement to my friends on Facebook. We need laws that give people more rights in their personal data.


Q: Is there any way to avoid this personalization? What if I’m not logged into a site?

A: Even if you’re not logged into Google, for example, an engineer told me there are 57 signals that the site uses to figure out who you are: whether you’re on a Mac or PC or iPad, where you’re located when you’re Googling, etc. And in the near future, it’ll be possible to “fingerprint” unique devices, so that sites can tell which individual computer you’re using. That’s why erasing your browser cookies is at best a partial solution—it only partially limits the information available to personalizers.
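
Device fingerprinting of the kind mentioned here can be sketched as hashing a set of stable browser signals into an identifier. The signal names below are illustrative assumptions, not a real tracker's feature set; the point is that none of them live in a cookie, so clearing cookies leaves the fingerprint intact:

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Derive a stable pseudo-identifier from device/browser signals."""
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint({
    "platform": "MacIntel",         # OS / hardware class
    "timezone": "Europe/Brussels",  # coarse location
    "screen":   "2560x1600",
    "language": "en-US",
}))
# Same device, same signals, same ID -- with or without cookies.
```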

What we really need is for the companies that power the filter bubble to take responsibility for the immense power they now have – the power to determine what we see and don’t see, what we know and don’t know. We need them to make sure we continue to have access to public discourse and a view of the common good. A world based solely on things we “Like” is a very incomplete world.

I’m optimistic that they can. It’s worth remembering that newspapers weren’t always informed by a sense of journalistic ethics. They existed for centuries without it. It was only when critics like Walter Lippmann began to point out how important they were that the newspapers began to change. And while journalistic ethics aren’t perfect, because of them we have been better informed over the last century. We need algorithmic ethics to guide us through the next.


Q: What are the business leaders at Google and Facebook and Yahoo saying about their responsibilities?

A: To be honest, they’re frustratingly coy. They tend to frame the trend in the passive tense: Google’s Eric Schmidt recently said “It will be very hard for people to watch or consume something that has not in some sense been tailored for them,” rather than “Google is making it very hard…” Mark Zuckerberg perfectly summed up the tension in personalization when he said “A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.” But he refuses to engage with what that means at a societal level – especially for the people in Africa.


Q: Your background is as a political organizer for the liberal website MoveOn.org. How does that experience inform your book?

A: I’ve always believed the Internet could connect us all together and help create a better, more democratic world. That’s what excited me about MoveOn – here we were, connecting people directly with each other and with political leaders to create change.

But that more democratic society has yet to emerge, and I think it’s partly because while the Internet is very good at helping groups of people with like interests band together (like MoveOn), it’s not so hot at introducing people to different people and ideas. Democracy requires discourse, and personalization is making that more and more elusive.

And that worries me, because we really need the Internet to live up to that connective promise. We need it to help us solve global problems like climate change, terrorism, or natural resource management, which by their nature require massive coordination, and great wisdom and ingenuity. These problems can’t be solved by a person or two – they require whole societies to participate. And that just won’t happen if we’re all isolated in a web of one."

Video

"As web companies strive to tailor their services (including news and search results) to our personal tastes, there's a dangerous unintended consequence: We get trapped in a "filter bubble" and don't get exposed to information that could challenge or broaden our worldview. Eli Pariser argues powerfully that this will ultimately prove to be bad for us and bad for democracy."