Centralized Web

From P2P Foundation


History

Discussion on 'The Rise of the Centralized Web', by Chelsea Barabas, Neha Narula and Ethan Zuckerman:

"Between 1989 and now, the World Wide Web transformed from an obscure system for publishing technical notes to a basic infrastructure of commerce, learning and social interaction. In celebrating the rise of the web and the ways it now provides interpersonal connection for billions of people, we often forget that the web has undergone dramatic organizational and infrastructural shifts. These shifts force us to reexamine one of our most cherished hopes for the web: that it could be a space for civic debate and social inclusion, opening previously closed conversations to a broader set of citizens.

When Tim Berners-Lee designed and implemented the hypertext transfer protocol, he was designing a system for use by physics researchers, mostly academics who had access to university computing resources. In pre-web days, academic computing users had accounts on shared computers, and the social norms of the time meant that users had a great deal of control over the computing resources they used. By the early 1990s, the emergence of the open web helped normalize this idea of distributed control of content. While thousands of people had published online using FTP, Gopher, Archie and WAIS (Wide Area Information Server), the web's increased usability meant that millions of people could then publish their own webpages.

As the web gained more widespread adoption, legal scholars and online advocates began to conceive of it as an important new battleground for preserving core social values, such as freedom of expression. For them, that battleground was situated squarely in the technical underpinnings of the web itself. Particular emphasis was placed on the structural factors that helped to preserve individual freedoms online, particularly against the encroachment of powerful actors such as the State. This perspective is well illustrated in the writings of early web advocates, such as John Perry Barlow’s A Declaration of the Independence of Cyberspace, in which the author proclaimed, “We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.

Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.”

Two decades after its publication, the tone of Barlow’s Declaration rings a bit naive. The web is a far more complicated place than the egalitarian, immaterial utopia Barlow depicts. But in the 1990s, writing like this deeply resonated with early advocates of cyberspace, who saw the technical architecture of the web as a powerful vehicle for achieving transformational social change through the free exchange of ideas. According to media historian Fred Turner, many of these ideas were an extension of left-leaning counterculture movements from the ’60s and ’70s, which sought to replace hierarchical social structures with new models of governance based on self-sufficiency and shared consciousness, rather than the laws of the ruling class.

Legal scholar Lawrence Lessig has perhaps most clearly articulated the idea that code itself is a mechanism for negotiating power and control over speech online. He, along with many other web enthusiasts, celebrated and defended the development of open protocols such as TCP/IP, which he argued deeply impacted the “regulability” of the Internet. TCP/IP is the protocol used to exchange data across a network, without knowing the content of the data or who the sender and recipients are in real life. According to Lessig, TCP/IP is a great example of how we are able to use code to build in strong protections for important values such as freedom of speech–the easier it is to set up point-to-point communication between parties, the harder it is to regulate and limit the exchange of certain kinds of data.

Similarly, HTTP was lauded as a critical component of the web’s open and distributed structure, because it enabled anyone with a web server to publish their own content, which (hypothetically) anyone with a web browser could then find. There was no need to ask permission, and there were few consequences for sharing ideas online. In theory, one could reach the whole world through the World Wide Web. In reality, that narrative was again oversimplified. The early web was quite chaotic and hard for users to navigate. The organization of content was highly distributed. It was assumed that users would be both publishers and readers, that each person would have a homepage composed of links that she authored and used to document useful resources and shortcuts across the web.

This distributed wayfinding architecture made it difficult to find resources online, especially as more and more people started making their own websites. Moreover, users needed a baseline of technical know-how in order to set up and run their own server for publishing. This created a significant barrier to entry for new users to participate fully in the dream of the open web.

Even though the Internet was built on distributed protocols, the web needed to consolidate around a few curated service platforms in order to become practical for everyday people to use. This trend towards consolidation has serious implications for two key functions of the web–publishing and discovery of content.

Due to improvements in usability, today’s web is much easier to use and open to vastly more people, but centers on a small number of points of control. The owners of those points of control–primarily large, for-profit, publicly traded companies–comprise a new class of elite power players, ones that have enormous influence on our online interactions. And because so many of our interactions–commercial, interpersonal and civic–are mediated online, we have inadvertently given these companies a great deal of control over our political lives and civic discourse. This trend is reflected in the growing number of user petitions for sites like Facebook to stop censoring content and banning the accounts of historically marginalized voices." (http://dci.mit.edu/assets/papers/decentralized_web.pdf)

Discussion

Risks Posed by the Centralized Web

By Chelsea Barabas, Neha Narula and Ethan Zuckerman:

"It’s undeniable that the rise of large publishing platforms like Facebook, Twitter and Medium has enabled a significantly more user-friendly web. But at what cost? Today just two websites, Facebook and Google, account for 81% of all incoming traffic to online news sources in the U.S. Over the last two years Facebook has overtaken Google as the number one source of incoming traffic, and current projections indicate that this trend is likely to continue over the coming years. Google now processes 3.5 billion search queries per day, roughly ten times more than its nearest competitors (Baidu, Yahoo, Microsoft, Yandex). In 2016, Facebook supported an average of over 1.2 billion active users per day. Recent surveys conducted by the Pew Research Center reveal that a clear majority of Facebook and Twitter users (63% on both sites) report using these platforms to access news on current events and other issues beyond the sphere of family and friends.

The rise of social media as a source of news cuts across nearly all demographic groups in the US. For Millennials, Facebook is by far the most dominant source of news on government and politics, on par with television news consumption for the Baby Boomer generation. In light of these trends, it is clear that a small and shrinking number of online platforms will have very significant influence over what media the public consumes on a daily basis. We can understand this influence in terms of two key aspects of online speech: these platforms control what is possible to publish, and they control whether others are likely to discover it. In the following section, we explore specific risks related to the publication and discovery of online speech.


Risk 1: Top-down, Direct Censorship

Users face an increased risk of censorship as our digital publishing ecosystem becomes increasingly consolidated around a few popular platforms. Generally speaking, service platforms controlled by a single company are more prone to top-down censorship and surveillance pressures from government than decentralized alternatives.

In order to stay in business, corporate social media networks which own user data must comply with local laws and regulations related to free speech and censorship. Otherwise, they could face legal repercussions that make it difficult for them to operate in certain jurisdictions. This was the case in the spring of 2016, when Facebook blocked users in Thailand from seeing satirical pages that poked fun at the King and Thai Royal Family. In a notice posted in lieu of the blocked content, Facebook explained that it took down the pages in order to comply with a local Thai law that prohibits defamation of the Royal family. Apparently the junta government has been increasing pressure on sites like Facebook and Line to comply with court orders to block content it deems “a threat to peace and order” in the country. Platform companies face a complex calculus in these cases. If Facebook decides to block pages on the basis of lèse-majesté, it will set a dangerous precedent, and may end up being forced to block more content by subsequent governments. On the other hand, companies like Facebook could decide to ignore local rulings and simply ensure they have no assets or personnel in those countries so those laws cannot be enforced.

Additionally, there have been many instances in which social media platforms like Twitter and Facebook have come under attack by national governments. This is particularly common during politically sensitive times, such as during the 2009 presidential election in Iran and amidst the outbreak of Arab Spring protests in Tunisia in 2011, when the government used malware to steal the passwords and take over the accounts of users who were critical of the Tunisian government.

But these issues are not limited to far distant lands where political revolution is bubbling up just under the surface. Just this August, a group of activists submitted a public letter to Facebook CEO Mark Zuckerberg, lobbying for a new “anti-censorship policy” after it was revealed that the platform, at the request of law enforcement, had taken down videos of a Baltimore woman who was shot and killed by the police. This incident was not the first time that Facebook had taken down content related to police killings in the U.S. Earlier this year, a video capturing the police shooting death of Philando Castile was removed from the platform in what was later described as a “glitch.” Activist groups have contested this description, claiming that the police had a role in removing the footage from Castile’s girlfriend’s account as it began to go viral across the Internet.

One reason these networks are susceptible to this type of surveillance and control is that they are required to comply with the local laws and regulations of the countries where their users reside. However, another reason is the way they have chosen to architect their systems. Unlike distributed systems such as BitTorrent, platforms like Facebook, YouTube and Twitter can delete content that legal authorities determine to be offensive. This causes two problems: First, because these companies want to maintain good relationships with governments, and governments can make it very difficult to access these sites from within their borders, the companies will comply with censorship requests. Since the networks are controlled by companies with clearly defined leadership who can potentially be prosecuted, it’s clear who to ask when seeking to censor content.

Second, because these companies completely control the software stack of how that content is ingested, stored, curated, and served, the companies are able to comply with such requests. An example of a structural change that makes it nearly impossible to comply with surveillance requests is WhatsApp’s move to end-to-end encryption for users’ messages–WhatsApp itself, despite being part of Facebook, actually cannot reveal unencrypted data to anyone who might request it, because it only stores encrypted messages on its servers, and not the decryption keys. This design has already led to legal battles in courts that seek to pressure WhatsApp to share user data; in 2016, for example, a Brazilian judge temporarily ordered the shutdown of the service after the company failed to comply with a request for encrypted data. A key goal of this paper is to explore these types of structural changes.
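The structural point can be made concrete with a minimal, illustrative sketch of end-to-end encryption. This is not WhatsApp's actual protocol (WhatsApp's encryption is based on the Signal protocol); it assumes the third-party PyNaCl library and only shows why a server that stores ciphertext but never holds private keys has nothing useful to hand over:

```python
# Minimal, illustrative end-to-end encryption sketch using PyNaCl
# (pip install pynacl). NOT WhatsApp's real protocol; it only demonstrates
# why a relay server that stores ciphertext but never holds private keys
# cannot comply with a request for plaintext.
from nacl.public import PrivateKey, Box

# Each client generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The platform only relays and stores ciphertext.
server_storage = [ciphertext]

# Even if compelled, the server cannot decrypt what it stores;
# only Bob, holding his private key, can recover the message.
plaintext = Box(bob_key, alice_key.public_key).decrypt(server_storage[0])
assert plaintext == b"meet at noon"
```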


Risk 2: Curatorial Bias / Indirect Censorship

In recent years, questions have been raised regarding the potential for unintentional or intentional biases to be embedded in the curation algorithms of major platforms like Facebook. Building on research from Robert Epstein and Ron Robertson suggesting that Google could tip an election by optimizing its search results, Zittrain notes that Facebook could influence electoral behavior by controlling what messages different readers see.

More subtle, but no less worrisome, are the unintentional ways in which Facebook and others tend to optimize for viral, feel-good content that will garner a large number of “likes.” For example, Facebook came under fire from media critics, who pointed out that there were marked differences in the way that Facebook and Twitter covered the outbreak of the protests in Ferguson, Missouri during the summer of 2014. Recent scholarly work has demonstrated the critical role that Twitter played in bringing these protests to the national spotlight. Thanks to organic grassroots conversation about what was going on on the ground in Ferguson, Twitter was able to surface breaking news from the frontlines, well before mainstream media had picked up the story.

In contrast, on Facebook, the most prominent story found on most Americans’ newsfeeds at the time was the Ice Bucket Challenge, a fundraising campaign for research to cure Lou Gehrig’s disease. As media scholar Zeynep Tufekçi pointed out, the Ice Bucket Challenge was perfect for the Facebook algorithm–it was viral, feel-good content–whereas more difficult and nuanced conversations about race and police violence could only be found on a platform with less top-down algorithmic influence. The urgency of this debate has significantly heightened in recent months, as individuals from across the political spectrum have expressed concerns about the proliferation of “fake news,” or click-bait headlines that confirm voters’ pre-existing political preferences and beliefs at the expense of fact-based coverage of current events.


Given the significant amount of leverage that social media platforms like Facebook and Twitter have over the content we consume (both online and offline, via indirect influence over mainstream media coverage), this incident has raised important questions about how unintentional bias manifests in the curation of content on these sites. Much of this debate centers on the need for greater transparency and accountability for the way today’s curation algorithms are constructed. As Tufekçi points out, “I wonder: What if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.”

Yet, the concept of transparency is not nearly as straightforward in the age of algorithmic curation. Curation algorithms are complex, living pieces of code that evolve over time. We are only beginning to develop the analytical frameworks necessary for understanding how slippery concepts like “bias” and “fake news” are encoded into algorithmic decision-making processes. We are even further from understanding how to translate those frameworks into practical accountability procedures.

Zittrain has advocated for greater transparency and ethical standards in how algorithms are designed and broadly implemented on major social media sites, arguing that “The most important fail-safe is the threat that a significant number of users, outraged by a betrayal of trust, would adopt alternative services, hurting the responsible company’s revenue and reputation.” Zittrain points to the potential for transparency to fuel competition-driven consumer protections, whereby consumers make decisions about what platform to use based on the reputation and curation decisions made by the site. Of course, Zittrain’s proposal requires interoperability across platforms and low switching costs in order for it to be practical to leave one social network to join a different one. Ultimately, Zittrain’s solutions face the same problem Rebecca MacKinnon’s Ranking Digital Rights project struggles with: increasing transparency about platform behavior is most impactful when users can actually switch platforms.

This emphasis on competition stands in contrast to the “benevolent monopoly” paradigm proposed by prominent technologists such as Peter Thiel, who argues that large companies without significant competitors can be more creative and effective in developing new, valuable services for their customers. These two paradigms are not mutually exclusive. The concept of an “information fiduciary” could be useful for ensuring that mega-platforms are checked for blatant abuses of their immense curatorial power, whereas competition might be the best way to fuel a healthier ecosystem of consumer choice for those concerned about a broader set of biases in the way that their newsfeeds are curated.

However, if competition is ever going to be a meaningful path towards resolving these issues, we must develop practical methods for lowering the costs of switching between different platform providers. In subsequent case studies, we will discuss the challenges of overcoming network effects and data lock-in, as well as explore strategies that might enable greater competition between incumbent mega-platforms and new platform alternatives.


Risk 3: Abuse of Curatorial Power

In recent months, Facebook has come under fire due to accusations that its employees systematically suppress the discovery of conservative content on their platform. These accusations were sparked by anonymous accounts from former Facebook employees, who claimed that they routinely removed conservative-leaning news stories from the network’s influential “Trending” news section, even when such stories were identified as a hot topic by the platform’s curation algorithm. Facebook has since denied the claims, saying that the company "found no evidence that the anonymous allegations are true."

Regardless of whether the accusations hold weight, the most important take-away is that it would be very difficult for an outside observer to detect such changes. The company has no legal or normative obligation to disclose how it prioritizes content on its site. Determining how the company automatically identifies content to remove, or how it prioritizes the display of certain content requires “algorithmic auditing”, which is difficult to conduct and may not be possible under existing laws and regulations.
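To give a sense of what external "algorithmic auditing" might involve in practice, here is a deliberately simplified sketch: an auditor records repeated snapshots of a trending module and compares how often items from different groups of outlets appear. All outlet names and numbers below are invented for illustration; a real audit would require many observation accounts, careful sampling, and baseline popularity data.

```python
# Toy illustration of one style of algorithmic auditing: comparing how often
# items from two hypothetical groups of outlets appear in observed "trending"
# lists. All data is made up; this is not an actual audit methodology spec.
from collections import Counter

CONSERVATIVE = {"outlet_a", "outlet_b"}
LIBERAL = {"outlet_c", "outlet_d"}

# Hypothetical snapshots of a trending module, one list per observation.
snapshots = [
    ["outlet_c", "outlet_d", "outlet_a"],
    ["outlet_c", "outlet_d", "outlet_c"],
    ["outlet_d", "outlet_c", "outlet_b"],
]

counts = Counter(source for snap in snapshots for source in snap)
total = sum(counts.values())
conservative_share = sum(counts[s] for s in CONSERVATIVE) / total
liberal_share = sum(counts[s] for s in LIBERAL) / total

print(f"conservative share: {conservative_share:.0%}")  # 22%
print(f"liberal share: {liberal_share:.0%}")            # 78%
# A large, persistent gap relative to each group's underlying popularity is the
# kind of signal an outside auditor could flag, but without access to the
# ranking code the gap alone cannot prove intentional suppression.
```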

Moreover, the influence of mega-platforms like Facebook is not limited simply to the curation of external content. Facebook is also perhaps the most powerful broker of social influence and signaling online. In 2010, Facebook ran a pilot to understand the impact of “political mobilization messages” on voter turnout for that year’s U.S. Congressional election. Researchers found that users were 0.39 percent more likely to vote if they were notified when their close friends had voted on Facebook. During the 2010 midterm elections, that translated to an estimated 340,000 additional votes, a margin that could have changed the outcome of close high-stakes elections, such as the 2000 U.S. presidential election, especially if applied selectively (i.e., if Facebook had urged members of one party to vote and not provided similar nudges to the other side). Mechanisms like this could play a significant role in influencing human behavior on an unprecedented scale, yet we have no checks and balances in place to ensure that this influence is not abused.

These developments have sparked growing concerns over the potential for Facebook to intentionally influence important civic events, such as the 2016 presidential election. In a statement released earlier this summer, the Republican party expressed such concerns, saying “With 167 million US Facebook users reading stories highlighted in the trending section, Facebook has the power to greatly influence the presidential election. It is beyond disturbing to learn that this power is being used to silence viewpoints and stories that don't fit someone else's agenda."

Historically, major media outlets have been viewed as both private entities and public service institutions, beholden to government regulations that seek to ensure that broadcast content serves the public interest. For example, under a law passed in 1934, the Federal Communications Commission requires “legally qualified” political candidates to have equal opportunities for airtime on broadcast TV and radio stations. The FCC promised to enforce this law in 2015, after long-shot political candidate Lawrence Lessig filed several requests with NBC affiliates to speak on air after Hillary Clinton was invited to guest star on the popular show Saturday Night Live.

This example is quite tricky–the fairness doctrine was repealed in the 1980s under Reagan and these prescriptions are much weaker than they used to be. As of now, that precedent has not carried over into the digital sphere. But as more and more of our media migrates to digital, networked spaces, one must ask whether or not such regulations should be extended to these realms as well. Perhaps it is okay to have just a few large platforms serve as content curators, as long as we can understand and hold them accountable for the way they wield that curatorial influence. However, it remains unclear how to translate concepts of public accountability to the digital sphere of networked publishing.


Risk 4: Exclusion

Mega-platforms like Facebook and Twitter aren’t just sites for passive consumption of content. They also provide important civic spaces for social and political discourse. The idea that underlies the civic media movement is that making and disseminating media is a form of civic engagement and power. The current opportunities to make media are unprecedented. An estimated one in four people in the world have an active Facebook account, and hundreds of millions more are connected by other large, centralized social networks.

In theory, this massive networked public sphere provides an unprecedented opportunity for everyday people to reach a global audience and engage in conversations with people from around the world. But the reality is not so straightforward. As adoption of Facebook has grown, so has the complexity of implementing effective community governance policies and user safeguards. Terms of service and community regulation efforts have unintended consequences, which are increasingly exacerbated the more monolithic the platform becomes. Media activist Jillian York has highlighted a wide range of groups who have been excluded and censored on the site, ranging from plus-sized women and LGBT groups to journalists and indigenous communities.

In some cases, exclusion is the result of clunky terms of service–such as when Facebook’s real name policy made it challenging for members of the transgender community to open and maintain accounts under adopted names or pseudonyms, used widely within the LGBT community. While the policy was intended to help minimize the number of inactive and fake accounts on the platform, it inadvertently excluded individuals with non-traditional names and those who need to use pseudonyms in order to protect their real identity (e.g., activists living under oppressive political regimes). In other instances, Facebook’s community governance standards have been misused to erase people whose personal situations are in conflict with mainstream norms and practices. This was the case when photos of topless aboriginal women and breastfeeding mothers were mislabeled as inappropriate content by other Facebook users because their breasts were uncovered.

But perhaps the most blatant examples of abuse of community standards stems from intergroup conflict–when one set of users actively seeks to suppress content from another group. This was the case in 2010 when some users formed a Facebook group called “Facebook Pesticide,” with the expressed purpose of reporting and removing outspoken Arab atheists and Muslim reformists from the site. To accomplish this goal, members of the group would coordinate reports of abuse against accounts they deemed unacceptable. While Facebook does not make explicit exactly how they choose to take down profiles, it seems that the platform automatically disables accounts after a certain number of reports are submitted. Automated enforcement of terms of service amplifies these problems of online speech. It's not feasible–or legally desirable–for Facebook to monitor the speech of over one billion members. Instead, they rely on reports from other users.

If several users flag content as inappropriate, it will likely be deleted. The technique is particularly effective when used on content in languages that Facebook's administrators don't read, such as Arabic. The platform does not inform users when their profile has been removed, nor does it give the reason for deactivation. This makes it challenging for users to seek recourse and reintegration back into the site. These issues are exacerbated by the lack of alternative publishing networks with comparable reach around the globe. Given that Facebook is the most widely used social network in the world, those who are excluded from the site face serious consequences.
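As a rough illustration of why automated, report-driven enforcement is easy to abuse, here is a toy model that assumes, as the text speculates, that content is disabled once a fixed number of distinct accounts report it. The threshold value and behavior are hypothetical, not Facebook's actual rules:

```python
# Toy model of threshold-based automated takedown. The threshold and the
# "disable automatically, notify no one" behavior are assumptions made only
# to illustrate the coordinated-flagging abuse described in the text.
from collections import defaultdict

REPORT_THRESHOLD = 5  # hypothetical value; real platform rules are not public

class ReportQueue:
    def __init__(self, threshold=REPORT_THRESHOLD):
        self.threshold = threshold
        self.reports = defaultdict(set)  # content_id -> set of reporting accounts
        self.disabled = set()

    def report(self, content_id, reporter_id):
        """Record a report; auto-disable content once enough distinct accounts flag it."""
        self.reports[content_id].add(reporter_id)
        if len(self.reports[content_id]) >= self.threshold:
            self.disabled.add(content_id)  # no human review, no notice to the author
        return content_id in self.disabled

# A small coordinated group can silence a target account or post:
queue = ReportQueue()
for account in ["acct_1", "acct_2", "acct_3", "acct_4", "acct_5"]:
    taken_down = queue.report("activist_profile", account)
print(taken_down)  # True: five flags were enough to remove the content
```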

This is not a theoretical problem. Users are blocked every day from Facebook as part of ongoing political disputes. Israeli activists, non-government groups and government departments frequently flag Facebook accounts of Palestinian journalists and activists, seeking their removal from the platform. As the political climate in the US grows more tense after Donald Trump’s election, some Facebook users report that they have been flagged and suspended from the service for expressing unpopular political opinions.

Exile from the platform not only makes it hard to engage in important civic discourse, but it also has important implications for how Internet users are able to access a broader range of services outside the site. In 2008, Facebook rolled out a new service called Facebook Connect, which allows third-party sites to piggyback off of the platform’s robust identity authentication and management system, rather than implement their own from scratch. In many ways, this service is a win-win for both users and websites, as it significantly lowers the costs of building out a secure identity infrastructure for smaller sites, while also simplifying account management for end users by minimizing the number of usernames and passwords they must remember. As a growing number of sites adopt Facebook Connect, some have likened the service to a kind of “driver’s license for the Internet,” the new de facto standard for identity on the web. But for all the benefits we gain in convenience, we must consider the equally serious risks of widespread exclusion this trend poses for those who are unable to access the Facebook platform, due to conflicts with clunky terms of service and abuse of community governance guidelines.
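The dependency described here can be sketched with a toy model of single-provider federated login (in the spirit of "Log in with Facebook", though not Facebook Connect's actual API): once the identity provider disables an account, every site that relies on that provider for authentication becomes inaccessible to the user as well.

```python
# Toy model of single-provider federated login. The classes and method names
# are hypothetical; they only illustrate how exclusion from the identity
# provider cascades to every relying third-party site.

class IdentityProvider:
    def __init__(self):
        self.active = {}  # username -> is the account in good standing?

    def register(self, username):
        self.active[username] = True

    def disable(self, username):
        # e.g. after a terms-of-service takedown or coordinated abuse reports
        self.active[username] = False

    def assert_identity(self, username):
        """Return a (toy) identity assertion only for accounts in good standing."""
        return {"sub": username} if self.active.get(username) else None


class RelyingSite:
    """A third-party site with no login system of its own."""
    def __init__(self, idp):
        self.idp = idp

    def login(self, username):
        return self.idp.assert_identity(username) is not None


idp = IdentityProvider()
idp.register("alice")
news_site, forum = RelyingSite(idp), RelyingSite(idp)

print(news_site.login("alice"))  # True: one account works everywhere
idp.disable("alice")             # the platform removes the account
print(news_site.login("alice"))  # False: excluded from the news site too
print(forum.login("alice"))      # False: and from every other relying site
```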

As the above examples illustrate, terms of service on mega-platforms, even when thoughtfully authored and enforced, can have far-reaching unintended consequences. When speech and access are limited in this fashion, those speaking have few alternatives. They can try to influence the platform owners, often by naming and shaming, publicly decrying their ill treatment. But power asymmetries make that prospect difficult. For example, in spite of continuous lobbying from the LGBTQ community, in partnership with organizations like the Electronic Frontier Foundation, Facebook has yet to implement significant changes to their real name policy. Of course, members of the LGBTQ community could publish on their own, but they lose the network effects of a system like Facebook, and they find themselves using less user-friendly tools and reaching smaller audiences.

It remains unclear how mega-platforms like Facebook should go about balancing the needs of marginalized groups with the broader goal of keeping the mainstream safe on a global scale. What is clear is that these tech companies operate and own a new sphere of influence, one which has transformed the Internet from a public commons to a gated corporate community. As more speech moves online, the ability of Facebook and other platforms to determine who can participate in important civic conversations becomes deeply concerning." (http://dci.mit.edu/assets/papers/decentralized_web.pdf)

Source

  • Report: Defending Internet Freedom through Decentralization: Back to the Future? By Chelsea Barabas, Neha Narula and Ethan Zuckerman. The Center for Civic Media & The Digital Currency Initiative, MIT Media Lab, 2017.

URL = http://dci.mit.edu/assets/papers/decentralized_web.pdf


More information