Technological Power in the Networked Age

From P2P Foundation

Typology

By K. Sabeel Rahman:

"The problems of technology have come into sharper focus. But this has brought difficulties of its own: technological power today operates in distinctive ways that make it both more dangerous and potentially more difficult to contest.

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it is directly analogous to the railroads of the nineteenth century, which enjoyed monopoly control over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.

On the consumer side, this places Amazon in a unique position to target prices and influence search results in ways that maximize its returns, and also favor its preferred producers. On the producer side, Amazon can make or break businesses and whole sectors, just like the railroads of yesteryear. Book publishers have long voiced concern about Amazon’s dominance, but this infrastructural control now extends to other kinds of retail activity, as third-party producers and sellers depend on Amazon to carry their products and to fairly reflect them in consumer searches.

As some studies indicate, Amazon will often deploy its vast trove of consumer data to identify successful third-party products which it can then displace through its own branded versions, priced at predatorily low levels to drive out competition. This is also the kind of infrastructural power exercised by internet service providers (ISPs) in the net neutrality context, through their control of the channels of data transmission. Their dominance raises similar concerns: just as Amazon can use its power to prevent producers from reaching consumers, ISPs can block, throttle, or prioritize preferred types of information.
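
The mechanics here are simple enough to sketch in a few lines. A toy illustration (all field names, numbers, and thresholds are invented; this depicts the incentive, not any actual Amazon system):

```python
# Toy sketch: a platform filtering its own sales telemetry for proven,
# high-margin third-party demand it could capture with a private label.
# Data and thresholds are invented for illustration.

third_party_products = [
    {"name": "AA batteries", "monthly_units": 90_000, "seller_margin": 0.35},
    {"name": "phone stand",  "monthly_units": 40_000, "seller_margin": 0.50},
    {"name": "dog bed",      "monthly_units": 5_000,  "seller_margin": 0.20},
]

def private_label_candidates(products, min_units=10_000, min_margin=0.30):
    """Flag items with demonstrated demand and room to undercut on price."""
    return [p for p in products
            if p["monthly_units"] >= min_units
            and p["seller_margin"] >= min_margin]

for p in private_label_candidates(third_party_products):
    print(f"{p['name']}: {p['monthly_units']:,} units/mo at {p['seller_margin']:.0%} margin")
```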

A second type of power arises from what we might think of as gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.

Crucially, gatekeeping power subordinates two kinds of users on either end of the “gate.” Content producers fear hidden or arbitrary changes to the algorithms behind Google Search or the Facebook News Feed, whose mechanics can make the difference between survival and destruction for their businesses. Meanwhile, end users unwittingly face an informational environment that is increasingly the product of these algorithms—which are optimized not to provide accuracy but to maximize the attention users spend on the site. The result is a built-in incentive for platforms like Facebook or YouTube to feed users content that confirms their preexisting biases, in ever more sensational versions, exacerbating the fragmentation of the public sphere into different “filter bubbles.”
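
The incentive can be made concrete with a toy ranker (the weights, fields, and items are invented; no real platform's model is this simple): when the only objective is predicted engagement, confirming and sensational content rises by construction.

```python
# Toy feed ranker optimized purely for predicted engagement.
# "stance" and "user_bias" run from -1 to 1; all values are invented.

def predicted_engagement(item, user_bias):
    alignment = item["stance"] * user_bias        # high when content confirms bias
    return 0.6 * max(alignment, 0.0) + 0.4 * item["sensationalism"]

def rank_feed(items, user_bias):
    return sorted(items, key=lambda i: predicted_engagement(i, user_bias),
                  reverse=True)

items = [
    {"title": "measured analysis",  "stance": 0.1,  "sensationalism": 0.2},
    {"title": "confirming outrage", "stance": 0.9,  "sensationalism": 0.9},
    {"title": "opposing view",      "stance": -0.8, "sensationalism": 0.3},
]

for item in rank_feed(items, user_bias=0.8):
    print(item["title"])
# "confirming outrage" ranks first: the filter-bubble incentive in miniature.
```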

These platforms’ gatekeeping decisions have huge social and political consequences. While the United States is only now grappling with concerns about online speech and the problems of polarization, radicalization, and misinformation, studies confirm that subtle changes—how Google ranks search results for candidates prior to an election, for instance, or the ways in which Facebook suggests to some users rather than others that they vote on Election Day—can produce significant changes in voting behavior, large enough to swing many elections.

A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big-data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they will quit, prove problematic as employees, or engage in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake in existing patterns of racial, gender, and economic bias. For example, employers might screen out women likely to become pregnant, or people of color who are already disproportionately targeted by the criminal justice system. This allows firms to engage in a kind of employment discrimination that would be illegal if it took place in the workplace itself, since scoring systems enable screening before the employer ever interacts with the candidate face-to-face.
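
A toy scorer shows how this works even when no protected attribute appears as an input (the features, weights, and zip codes below are invented): historical bias enters through proxy variables.

```python
# Toy risk scorer: race and gender are never inputs, yet the score
# reproduces bias via proxies learned from skewed historical data.
# All features, weights, and zip codes are invented.

HIGH_ARREST_ZIPS = {"10451", "60624"}  # neighborhoods heavily policed in the past

def applicant_risk_score(applicant: dict) -> float:
    score = 0.0
    if applicant["zip"] in HIGH_ARREST_ZIPS:
        score += 0.5   # "neighborhood risk" proxies for race and class
    if applicant["employment_gap_months"] > 6:
        score += 0.3   # penalizes, e.g., time taken off around childbirth
    return score

print(applicant_risk_score({"zip": "10451", "employment_gap_months": 8}))   # 0.8
print(applicant_risk_score({"zip": "94301", "employment_gap_months": 0}))   # 0.0
```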

Scoring power is not a new phenomenon. Consider the way that financial firms gamed the credit ratings agencies to mark toxic mortgage-backed assets as “AAA,” enabling them to extract immense profits while setting up the world economy for the 2008 financial crisis. But what big data and the proliferation of AI enable is the much wider use of similarly flawed scoring systems. As these systems become more widespread, their power—and risk—magnifies.

Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enables a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffuse system.

This is the paradox of technological power in a networked age. Where a decade or two ago, these technologies may have seemed intrinsically decentralizing, they have in fact enabled new forms of concentrated power and control through transmission, gateways, and scoring. These forms of power, furthermore, often operate in the background, opaque and hidden from view. This makes them harder to challenge and contest." (https://logicmag.io/04-the-new-octopus/)


Discussion

New government institutions for oversight

By K. Sabeel Rahman:

"To the extent that we doubt the efficacy and independence of self-regulation, we might create new government institutions for oversight. These agencies would have to leverage interdisciplinary expertise in data, law, ethics, sociology, and other fields in order to monitor and manage the activities of technological infrastructure whether in their transmission, gatekeeping, or scoring forms.

Along these lines, several scholars have suggested the formation of regulatory bodies to assess algorithms, the use of big data, search engines, and the like, subjecting them to risk assessments, audits, and some form of public participation. Government oversight could attempt to ensure that firms respect values like nondiscrimination, neutrality, common carriage, due process, and privacy. These regulatory institutions would monitor compliance and continue to revise standards over time.
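
One concrete shape such an audit could take is the "four-fifths rule" long used in US employment-discrimination analysis, applied to an algorithm's outputs. A minimal sketch (the selection counts are invented):

```python
# Sketch of a regulator's disparate-impact check on an algorithm's decisions,
# using the conventional four-fifths (80%) rule. Counts are invented.

def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = adverse_impact_ratio(selected_a=30, total_a=100,   # group A passes 30%
                             selected_b=60, total_b=100)   # group B passes 60%
print(f"adverse impact ratio: {ratio:.2f}")  # 0.50 < 0.80 -> flag for review
```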

Yet both self-governance and regulatory oversight depend to some degree on the human capacities of the overseers, whether private or public. Call these managerial strategies for checking concentrated power. The problem with managerialism is that even if we built a powerful, independent, and accountable public (or private) oversight regime, it would face the difficulties endured by any regulator of a complex system: industry is likely to be several steps ahead of government, especially if it is incentivized to seek returns by bypassing regulatory constraints. Furthermore, the efficacy of regulation will turn entirely on the skill, commitment, creativity, and independence of regulators themselves.

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.

The public option solution is not a new one. Our modern-day public utilities, from water to electricity, emerged out of this very concern that certain kinds of infrastructure are too important to be left in private hands. This infrastructure doesn’t have to be physical: during the reform debate after the financial crisis, for example, there was a proposal to provide a public alternative to for-profit credit ratings agencies, to break the oligopoly of those ratings companies and their rampant conflicts of interest.

What would public options look like in a technological context? Municipally owned broadband networks can provide a public alternative to private ISPs, ensuring equitable access and putting competitive pressure on corporate providers. We might even imagine publicly owned search engines and social media platforms—perhaps less likely, but theoretically possible.

We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Usually privacy concerns are framed as a matter of individual rights: the user’s privacy is invaded by firms collecting data. But if we take seriously the types of technological power sketched above, then privacy acquires a larger significance. It becomes not just a personal issue but a structural one: a way to limit the kinds of data that firms can collect, in turn reducing the risk of arbitrary and biased technological power. Such privacy rules can be achieved by legal mandate and regulation, or through technological tools that deliberately corrupt some of the data platforms collect on users.
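
Randomized response, a classic technique from survey privacy, is one concrete version of such a tool. A minimal sketch (the truth probability is an invented parameter): the platform can still estimate aggregates, but no individual record can be trusted.

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the truth with probability p_truth; otherwise flip a coin."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

# The platform sees only noisy per-user answers...
reports = [randomized_response(True) for _ in range(10_000)]
observed = sum(reports) / len(reports)

# ...and can de-bias the aggregate (observed = 0.75*true + 0.25*0.5),
# but no single user's report reveals their true answer.
estimated_true_rate = (observed - 0.25 * 0.5) / 0.75
print(f"estimated population rate: {estimated_true_rate:.2f}")  # ~1.00
```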

Tax policy could also play a role. Some commentators have proposed a “big data tax” as another structural check on certain kinds of big data collection and algorithmic use. Just as a financial transactions tax would cut down on short-term speculation in the stock market, a big data tax would reduce the volume of data collected. Forcing companies to collect less data would structurally limit the kinds of risky or irresponsible uses to which such data can be directed.
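
The mechanism can be put in miniature (the value curve and tax rates are invented): under diminishing returns to data, even a modest per-record tax caps how much it pays to collect.

```python
# Toy model of a per-record data tax. Collection is profitable only while
# the marginal value of one more record exceeds the tax on it.
# The value curve and rates below are invented for illustration.

def marginal_value(n_records: float) -> float:
    """Diminishing returns: each additional record is worth less."""
    return 1.0 / (1.0 + n_records / 1_000_000)

def records_collected(tax_per_record: float) -> float:
    """Closed-form solution of marginal_value(n) = tax for this curve."""
    return max(0.0, (1.0 / tax_per_record - 1.0) * 1_000_000)

for tax in (0.0001, 0.01, 0.10):
    print(f"tax ${tax:.4f}/record -> ~{records_collected(tax):,.0f} records collected")
```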

Finally, antitrust-style restrictions on firms might reduce problematic conflicts of interest. For example, we might limit practices of vertical integration: Amazon might be forbidden from being both a platform and a producer of its own goods and content sold on its own platform, as a way of preventing the incentive to self-deal. Indeed, in some cases we might take a conventional antitrust route, and break up big companies into smaller ones." (https://logicmag.io/04-the-new-octopus/)