Protocollary Power

From P2P Foundation

Protocollary Power is a concept developed by Alexander Galloway in his book Protocol to denote the new way power and control are exercised in distributed networks.

See also: Architecture of Control ; Computing Regime


From Alexander Galloway in his book Protocol:

"Protocol is not a new word. Prior to its usage in computing, protocol referred to any type of correct or proper behavior within a specific system of conventions. It is an important concept in the area of social etiquette as well as in the fields of diplomacy and international relations. Etymologically it refers to a fly-leaf glued to the beginning of a document, but in familiar usage the word came to mean any introductory paper summarizing the key points of a diplomatic agreement or treaty.

However, with the advent of digital computing, the term has taken on a slightly different meaning. Now, protocols refer specifically to standards governing the implementation of specific technologies. Like their diplomatic predecessors, computer protocols establish the essential points necessary to enact an agreed-upon standard of action. Like their diplomatic predecessors, computer protocols are vetted out between negotiating parties and then materialized in the real world by large populations of participants (in one case citizens, and in the other computer users). Yet instead of governing social or political practices as did their diplomatic predecessors, computer protocols govern how specific technologies are agreed to, adopted, implemented, and ultimately used by people around the world. What was once a question of consideration and sense is now a question of logic and physics.

To help understand the concept of computer protocols, consider the analogy of the highway system. Many different combinations of roads are available to a person driving from point A to point B. However, en route one is compelled to stop at red lights, stay between the white lines, follow a reasonably direct path, and so on. These conventional rules that govern the set of possible behavior patterns within a heterogeneous system are what computer scientists call protocol. Thus, protocol is a technique for achieving voluntary regulation within a contingent environment.

These regulations always operate at the level of coding--they encode packets of information so they may be transported; they code documents so they may be effectively parsed; they code communication so local devices may effectively communicate with foreign devices. Protocols are highly formal; that is, they encapsulate information inside a technically defined wrapper, while remaining relatively indifferent to the content of information contained within. Viewed as a whole, protocol is a distributed management system that allows control to exist within a heterogeneous material milieu.
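The "technically defined wrapper" idea can be sketched as a toy framing protocol: a length-prefixed header that transports any payload while remaining entirely indifferent to its content. This is a minimal illustration, not any real wire format:

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    """Prepend a 4-byte big-endian length header; the wrapper does not
    care what the payload contains."""
    return struct.pack(">I", len(payload)) + payload

def decode_frame(frame: bytes) -> bytes:
    """Read the header, then extract exactly the payload it describes."""
    (length,) = struct.unpack(">I", frame[:4])
    return frame[4:4 + length]

message = b"any content at all"
assert decode_frame(encode_frame(message)) == message
```

The protocol constrains the form (every message must carry a header) while saying nothing about the meaning of what is carried, which is precisely the formality-plus-indifference Galloway describes.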

It is common for contemporary critics to describe the Internet as an unpredictable mass of data--rhizomatic and lacking central organization. This position states that since new communication technologies are based on the elimination of centralized command and hierarchical control, it follows that the world is witnessing a general disappearance of control as such.

This could not be further from the truth. I argue in this book that protocol is how technological control exists after decentralization. The "after" in my title refers to both the historical moment after decentralization has come into existence, but also--and more important--the historical phase after decentralization, that is, after it is dead and gone, replaced as the supreme social management style by the diagram of distribution."

Citations on Design as a function of Protocollary Power

Mitch Ratcliffe:

"Yes, networks are grown. But the medium they grow in, in this case the software that supports them, is not grown but designed & architected. The social network ecosystem of the blogosphere was grown, but the blog software that enabled it was designed. Wikis are a socially grown structure on top of software that was designed. It's fortuitous that the social network structures that grew on those software substrates turn out to have interesting & useful properties.

With a greater understanding of which software structures lead to which social network topologies & what the implications are for the robustness, innovativeness, error correctiveness, fairness, etc. of those various topologies, software can be designed that will intentionally & inevitably lead to the growth of political social networks that are more robust, innovative, fair & error correcting."

Mitch Kapor on 'Politics is Architecture'

"'Politics is architecture': The architecture (structure and design) of political processes, not their content, is determinative of what can be accomplished. Just as you can’t build a skyscraper out of bamboo, you can’t have a participatory democracy if power is centralized, processes are opaque, and accountability is limited."

Power in Networks: Virtual Location (V. Krebs)

"In social networks, location is determined by your connections and the connections of those around you – your virtual location.

Two social network measures, Betweenness and Closeness, are particularly revealing of a node’s advantageous or constrained location in a network. The values of both metrics are dependent upon the pattern of connections that a node is embedded in. Betweenness measures the control a node has over what flows in the network – how often is this node on the path between other nodes? Closeness measures how easily a node can access what is available via the network – how quickly can this node reach all others in the network? A combination where a node has easy access to others, while controlling the access of other nodes in the network, reveals high informal power."
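Krebs's two measures can be computed directly. Below is a minimal sketch on an invented five-node network (the nodes and edges are assumptions chosen for illustration; real analyses use optimized algorithms such as Brandes's for betweenness):

```python
from collections import deque
from itertools import permutations

# Toy undirected network (invented): node "B" bridges the two sides.
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("C", "E")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def bfs_distances(source):
    """Hop counts from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def closeness(node):
    """How quickly can this node reach all others?"""
    dist = bfs_distances(node)
    return (len(adj) - 1) / sum(dist.values())

def shortest_paths(s, t):
    """Enumerate all shortest s-t paths (fine for tiny graphs)."""
    dist = bfs_distances(s)
    paths, stack = [], [[s]]
    while stack:
        path = stack.pop()
        u = path[-1]
        if u == t:
            paths.append(path)
            continue
        for v in adj[u]:
            if dist[v] == dist[u] + 1:  # follow the shortest-path DAG only
                stack.append(path + [v])
    return paths

def betweenness(node):
    """How often does this node sit on shortest paths between other nodes?"""
    score = 0.0
    for s, t in permutations(adj, 2):
        if node in (s, t):
            continue
        paths = shortest_paths(s, t)
        score += sum(node in p for p in paths) / len(paths)
    return score / 2  # undirected: each pair was counted in both directions
```

On this toy graph, B scores highest on both measures: it reaches everyone quickly and sits on most shortest paths between others, i.e. the "high informal power" combination Krebs describes.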

Fred Stutzman on Pseudo-Governmental Decisions in Social Software

"When one designs social software, they are forced to make pseudo-governmental decisions about how the contained ecosystem will behave. Examples of these decisions include limits on friending behavior, limits on how information in a profile can be displayed, and how access to information is restricted in the ecosystem. These rules create and inform the structural aspects of the ecosystem, causing participants in the ecosystem to behave a specific way.

As we use social software more, and social software more neatly integrates with our lives, a greater portion of our social rules will come to be enforced by the will of software designers. Of course, this isn't new - when we elected to use email, we agreed to buy into the social consequences of email. Perhaps because we are so used to making tradeoffs when we adopt social technology, we don't notice them anymore. However, as social technology adopts a greater role in mediating our social experience, it will become very important to take a critical perspective in analyzing how the will of designers changes us."
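The "pseudo-governmental decisions" Stutzman describes can be sketched as a policy layer in code. Every limit and field name below is invented for illustration and taken from no real platform:

```python
# Hypothetical policy layer for a social platform; the designer's choices
# here silently govern participant behavior.
POLICY = {
    "max_friends": 5000,                       # limit on friending behavior
    "public_profile_fields": {"name", "bio"},  # limit on profile display
}

def can_add_friend(current_friend_count: int) -> bool:
    # The software, not a negotiation between users, decides when
    # friending stops.
    return current_friend_count < POLICY["max_friends"]

def visible_profile(profile: dict, viewer_is_friend: bool) -> dict:
    # Access to information is restricted by rule: friends see everything,
    # strangers see only the whitelisted fields.
    if viewer_is_friend:
        return dict(profile)
    return {k: v for k, v in profile.items()
            if k in POLICY["public_profile_fields"]}
```

Changing a single constant in `POLICY` reshapes the "contained ecosystem" for every participant, which is exactly the structural power the passage points to.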

Philippe Zafirian on the Two Faces of the Control Society

Philippe Zafirian's remarks suggest that protocollary power is related to a shift within the 'society of control': from disciplinary control to the control of engagement (translated from the French):

"Gilles Deleuze, commenting on Foucault, developed a formidable intuition: we are shifting, he said, from the disciplinary society to the society of control. Or, to put things slightly differently, from the society of disciplinary control to the society of control of engagement. On one face, this control can be interpreted as a form of exercise of a power of domination, a structurally unequal power acting instrumentally on the actions of others. This control of engagement is distinguished in depth from disciplinary control in that it no longer imposes the mould of 'tasks', of assignment to a workstation, of enclosure within factory discipline. It no longer confines, either in space or in time. It ceases to present itself as closure within a prison cell, itself placed under constant surveillance. Following Deleuze's intuition, we pass from the mould to modulation, from enclosure to circulation in the open air, from the factory to inter-firm mobility. Everything becomes modulable: working time, professional space, the tie to the company, the results to be achieved, remuneration... The contract between employee and employer itself ceases to be rigid and stable. It becomes perpetually renegotiable. Everything is at all times liable to be called into question, modified, altered."


Algorithmic Power

Nicholas Diakopoulos:


"Prioritization, ranking, or ordering serves to emphasize or bring attention to certain things at the expense of others. The city of New York uses prioritization algorithms built atop reams of data to rank buildings for fire-code inspections, essentially optimizing for the limited time of inspectors and prioritizing the buildings most likely to have violations that need immediate remediation. Seventy percent of inspections now lead to eviction orders from unsafe dwellings, up from 13 percent without using the predictive algorithm—a clear improvement in helping inspectors focus on the most troubling cases. Prioritization algorithms can make all sorts of civil services more efficient. For instance, predictive policing, the use of algorithms and analytics to optimize police attention and intervention strategies, has been shown to be an effective crime deterrent. Several states are now using data and ranking algorithms to identify how much supervision a parolee requires. In Michigan, such techniques have been credited with lowering the recidivism rate by 10 percent since 2005. Another burgeoning application of data and algorithms ranks potential illegal immigrants so that higher risk individuals receive more scrutiny. Whether it’s deciding which neighborhood, parolee, or immigrant to prioritize, these algorithms are really about assigning risk and then orienting official attention aligned with that risk. When it comes to the question of justice though, we ought to ask: Is that risk being assigned fairly and with freedom from malice or discrimination? Embedded in every algorithm that seeks to prioritize are criteria, or metrics, which are computed and used to define the ranking through a sorting procedure.
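The criteria-and-sorting procedure just described can be sketched in a few lines. The buildings, features, and weights below are invented; the point is that the weights themselves are the embedded value-proposition:

```python
# Invented inspection-prioritization sketch: the features, weights, and
# buildings are assumptions chosen to show how criteria embed value choices.
buildings = [
    {"id": "A", "age": 80, "past_violations": 2, "complaints": 5},
    {"id": "B", "age": 15, "past_violations": 0, "complaints": 1},
    {"id": "C", "age": 40, "past_violations": 6, "complaints": 2},
]

# The weights ARE the value-proposition: change them and a different
# building tops the list.
weights = {"age": 0.01, "past_violations": 0.5, "complaints": 0.2}

def risk_score(building):
    return sum(weights[k] * building[k] for k in weights)

ranked = sorted(buildings, key=risk_score, reverse=True)
```

With these weights, past violations dominate and building C is inspected first; weight age more heavily and building A would top the ranking instead, even though no data changed.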

These criteria essentially embed a set of choices and value-propositions that determine what gets pushed to the top of the ranking. Unfortunately, sometimes these criteria are not public, making it difficult to understand the weight of different factors contributing to the ranking. For instance, since 2007 the New York City Department of Education has used what’s known as the value-added model (VAM) to rank about 15 percent of the teachers in the city. The model’s intent is to control for individual students’ previous performance or special education status and compute a score indicating a teacher’s contribution to students’ learning. When media organizations eventually obtained the rankings and scores through a Freedom of Information Law (FOIL) request, the teacher’s union argued that, “the reports are deeply flawed, subjective measurements that were intended to be confidential.” Analysis of the public data revealed that there was only a correlation of 24 percent between any given teacher’s scores across different pupils or classes.

This suggests the output scores are very noisy and don’t precisely isolate the contribution of the teacher. What’s problematic in understanding why that’s the case is the lack of accessibility to the criteria that contributed to the fraught teacher rankings. What if the value-proposition of a certain criterion’s use or weighting is political or otherwise biased, intentionally or not?


Classification decisions involve categorizing a particular entity as a constituent of a given class by looking at any number of that entity’s features. Classifications can be built off of a prioritization step by setting a threshold (e.g., anyone with a GPA above X is classified as being on the honor roll), or through more sophisticated computing procedures involving machine learning or clustering. Google’s Content ID is a good example of an algorithm that makes consequential classification decisions that feed into filtering decisions. Content ID is an algorithm that automatically scans all videos uploaded to YouTube, identifying and classifying them according to whether or not they have a bit of copyrighted music playing during the video. If the algorithm classifies your video as an infringer it can automatically remove (i.e., filter) that video from the site, or it can initiate a dialogue with the content owner of that music to see if they want to enforce a copyright. Forget the idea of fair use, or a lawyer considering some nuanced and context-sensitive definition of infringement, the algorithm makes a cut-and-dry classification decision for you. Classification algorithms can have biases and make mistakes though; there can be uncertainty in the algorithm’s decision to classify one way or another. Depending on how the classification algorithm is implemented there may be different sources of error. For example, in a supervised machine-learning algorithm, training data is used to teach the algorithm how to place a dividing line to separate classes. Falling on either side of that dividing line determines to which class an entity belongs. That training data is often gathered from people who manually inspect thousands of examples and tag each instance according to its category.

The algorithm learns how to classify based on the definitions and criteria humans used to produce the training data, potentially introducing human bias into the classifier.

In general, there are two kinds of mistakes a classification algorithm can make—often referred to as false positives and false negatives. Suppose Google is trying to classify a video into one of two categories: “infringing” or “fair use.” A false positive is a video classified as “infringing” when it is actually “fair use.” A false negative, on the other hand, is a video classified as “fair use” when it is in fact “infringing.” Classification algorithms can be tuned to make fewer of either of those mistakes. However, as false positives are tuned down, false negatives will often increase, and vice versa. Tuned all the way toward false positives, the algorithm will mark a lot of fair use videos as infringing; tuned the other way it will miss a lot of infringing videos altogether. You get the sense that tuning one way or the other can privilege different stakeholders in a decision, implying an essential value judgment by the designer of such an algorithm. The consequences or risks may vary for different stakeholders depending on the choice of how to balance false positive and false negative errors. To understand the power of classification algorithms we need to ask: Are there errors that may be acceptable to the algorithm creator, but do a disservice to the public? And if so, why was the algorithm tuned that way?
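The tuning trade-off can be made concrete with a toy score-threshold classifier; the match scores and ground-truth labels below are invented:

```python
# Toy score-threshold classifier; scores and labels are invented to show
# the false-positive/false-negative trade-off.
videos = [  # (match_score, truly_infringing)
    (0.95, True), (0.80, True), (0.65, False),
    (0.55, True), (0.40, False), (0.10, False),
]

def errors(threshold):
    """Count (false positives, false negatives) at a given threshold."""
    fp = sum(1 for score, infringing in videos
             if score >= threshold and not infringing)
    fn = sum(1 for score, infringing in videos
             if score < threshold and infringing)
    return fp, fn
```

On this data a strict threshold of 0.9 gives (0, 2): no fair-use video is flagged, but two infringing videos slip through. A lenient threshold of 0.3 gives (2, 0): every infringement is caught, at the cost of flagging two fair-use videos. Picking the threshold is exactly the designer's value judgment the text describes.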


Association decisions are about marking relationships between entities. A hyperlink is a very visible form of association between webpages. Algorithms exist to automatically create hyperlinks between pages that share some relationship on Wikipedia for instance. A related algorithmic decision involves grouping entities into clusters, in a sort of association en masse. Associations can also be prioritized, leading to a composite decision known as relevance. A search engine prioritizes the association of a set of webpages in response to a query that a user enters, outputting a ranked list of relevant pages to view. Association decisions draw their power through both semantics and connotative ability. Suppose you’re doing an investigation of doctors known to submit fraudulent insurance claims. Several doctors in your dataset have associations to known fraudsters (e.g., perhaps they worked together at some point in the past). This might suggest further scrutinizing those associated doctors, even if there’s no additional evidence to suggest they have actually done something wrong. IBM sells a product called InfoSphere Identity Insight, which is used by various governmental social service management agencies to reduce fraud and help make decisions about resource allocation. The system is particularly good at entity analytics, building up context around people (entities) and then figuring out how they’re associated. One of the IBM white papers for the product points out a use case that highlights the power of associative algorithms. The scenario depicted is one in which a potential foster parent, Johnson Smith, is being evaluated. InfoSphere is able to associate him, through a shared address and phone number, with his brother, a convicted felon.
The paper then renders judgment: “Based on this investigation, approving Johnson Smith as a foster parent is not recommended.” In this scenario the social worker would deny a person the chance to be a foster parent because he or she has a felon in the family. Is that right? In this case because the algorithm made the decision to associate the two entities, that association suggested a particular decision for the social worker. Association algorithms are also built on criteria that define the association. An important metric that gets fed into many of these algorithms is a similarity function, which defines how precisely two things match according to the given association. When the similarity reaches a particular threshold value, the two things are said to have that association. Because of their relation to classification then, association decisions can also suffer the same kinds of false positive and false negative mistakes.
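The similarity-function-plus-threshold mechanism described above can be sketched as follows. The records, attributes, and threshold value are all invented for illustration (this is not how InfoSphere itself works):

```python
# Association sketch: a similarity function plus a threshold decides
# whether two records count as "associated".
def jaccard(a: set, b: set) -> float:
    """Fraction of attributes two records share."""
    return len(a & b) / len(a | b)

records = {
    "Johnson Smith": {"addr:12 Elm St", "phone:555-0100"},
    "Brother":       {"addr:12 Elm St", "phone:555-0100", "flag:felony"},
    "Unrelated":     {"addr:9 Oak Ave", "phone:555-0199"},
}

THRESHOLD = 0.5  # tuning this trades false positives against false negatives

def associated(x: str, y: str) -> bool:
    return jaccard(records[x], records[y]) >= THRESHOLD
```

A shared address and phone number push the similarity above the threshold, so "Johnson Smith" and "Brother" become associated and the felony flag connotatively attaches to both, which is the decisive step the white paper's scenario relies on.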


The last algorithmic decision I’ll consider here is filtering, which involves including or excluding information according to various rules or criteria. Indeed, inputs to filtering algorithms often take prioritizing, classification, or association decisions into account. In news personalization apps like Zite or Flipboard news is filtered in and out according to how that news has been categorized, associated to the person’s interests, and prioritized for that person. Filtering decisions exert their power by either over-emphasizing or censoring certain information. The thesis of Eli Pariser’s The Filter Bubble is largely predicated on the idea that by only exposing people to information that they already agree with (by overemphasizing it), it amplifies biases and hampers people’s development of diverse and healthy perspectives. Furthermore, there’s the issue of censorship. Weibo, the Chinese equivalent to Twitter, uses computer systems that constantly scan, read, and censor any objectionable content before it’s published. If the algorithm isn’t sure, a human censor is notified to take a look."
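The composite nature of filtering, where earlier classification, association, and prioritization outputs become inputs to an include/exclude rule, can be sketched like this (the stories, fields, and blocklist are invented):

```python
# Filtering sketch: a category classification, an interest/association
# score, and a censorship rule feed one include/exclude-and-rank step.
stories = [
    {"title": "Local fire-code ruling", "category": "politics", "interest": 0.9},
    {"title": "Celebrity gossip", "category": "entertainment", "interest": 0.2},
    {"title": "Banned topic report", "category": "politics", "interest": 0.8},
]

BLOCKLIST = {"Banned topic report"}  # censorship is just another filter rule

def feed(items, min_interest=0.5):
    kept = [s for s in items
            if s["title"] not in BLOCKLIST and s["interest"] >= min_interest]
    return sorted(kept, key=lambda s: s["interest"], reverse=True)
```

One story is excluded for low interest (over-emphasis of what the reader already likes) and one by the blocklist (censorship); the reader sees only what survives both rules, and neither exclusion is visible in the output.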

Three Sets of Governance Techniques

Christophe Benavent draws important distinctions between three different control and governance mechanisms (translated from the French):

"Three large sets of techniques can be discerned: algorithms, belonging to computing technique; 'policies', belonging to regulatory and security technique; and motivational systems, belonging to psychological technique.

Techniques, in every case.

  • rules of calculation:

The techniques by which platforms govern populations operate essentially through algorithmic means, which take different forms: an order of presentation of profiles (filtering), the very content of those profiles (signaling), the modes of calculation used to suggest profiles, systems of gratification, synthetic feedback (dashboards). These are the techniques of scoring models, recommendation models, and ranking models. They distribute action and effort, and regulate the distribution of resources. The question of the politics of algorithms is consubstantial with them. Even when these calculations result from an intention, they do not always bend to it, owing to the contingency of their implementation, to chance in the choices made, and to the logic proper to the algorithms, and they can produce unexpected effects. The unexpected sometimes lies outside the system, as with the discriminatory effect of platforms highlighted by XXX. These calculations can be elementary (counting likes) or very sophisticated (deep learning and face recognition). They can operate on raw data, or be translated into rules of aggregation and smoothing in what we will call management indicators. These calculations naturally require partial elements, which are less data than products. The ratings collected are not exactly the judgments of individuals but constructs of those judgments, the result of a confrontation between the measuring instrument and the judgment. They result from agency but are also performative: they do not state the state of the world but shape interactions.

  • rules distributing roles (rights, obligations, and prohibitions).

While fundamental in all organizations, these rules are defined by platforms in a precise and exclusive manner. A read right is quite distinct from a write or administration right. These rules also bear on what one may or may not do; the prohibition of depictions of nudity is a very good example. From a technical point of view, they correspond to what are called 'policies' in English, which we may simply translate as internal regulations, though the English term carries an apt polysemy: a policy is the drafting of a contract, as with an insurance policy, but 'police' is also the apparatus of order maintenance and deviance control that ensures the rule is executed. A rule demands its application, and the action of policing generally takes the form of partial or total exclusion. In the digital age, penal law is a law of censorship. One need only think of nudity policies to grasp what is at stake; let us hope that no platform moderates political or religious discussions this way. Policies are the armed wing of platform managers. They are expressed first in a body of rules that stratify over time, sometimes requiring refreshes; Facebook has the reputation of carrying out this operation very regularly.

  • motivational rules (sanctions and incentives) are essential.

These have been popularized through the notion of gamification and materialize either as material gratifications (cash back, points, miles...) or symbolic ones (badges, grades, levels). They share with the nudge the idea that coercion motivates less than encouragement. They play on social comparison and self-confidence. They are the object of many developments as cognitive biases and motivational mechanisms become better understood. They rest on rules of calculation and are themselves the object of regulation, but find their identity in their adjustment to agents' interests, cognition, and affectivity."


Social Values in Technical Code:

"In a paper about the hacker community, Hannemyr compares and contrasts software produced in both open source and commercial realms in an effort to deconstruct and problematize design decisions and goals. His analysis provides us with further evidence regarding the links between social values and software code. He concludes:

"Software constructed by hackers seem to favor such properties as flexibility, tailorability, modularity and openendedness to facilitate on-going experimentation. Software originating in the mainstream is characterized by the promise of control, completeness and immutability" (Hannemyr, 1999).

To bolster his argument, Hannemyr outlines the striking differences between document mark-up languages (like HTML and Adobe PDF), as well as various word processing applications (such as TeX and Emacs versus Microsoft Word) that have originated in open and closed development environments. He concludes that "the difference between the hacker’s approach and those of the industrial programmer is one of outlook: between an agoric, integrated and holistic attitude towards the creation of artifacts and a proprietary, fragmented and reductionist one" (Hannemyr, 1999). As Hannemyr’s analysis reveals, the characteristics of a given piece of software frequently reflect the attitude and outlook of the programmers and organizations from which it emerges."

Social Media: Re-introducing centralization through the back door

Armin Medosch:

"In media theory much has been made of the one-sided and centralised broadcast structure of television and radio. The topology of the broadcast system, centralised, one-to-many, one-way, has been compared unfavourably to the net, which is a many-to-many structure, but also one-to-many and many-to-one; it is, in terms of a topology, a highly distributed or mesh network. So the net has been hailed as finally making good on the promise of participatory media usage. What so-called social media do is to re-introduce a centralised structure through the backdoor. While the communication of the users is 'participatory' and many-to-many, and so on and so forth, this is organised via a centralised platform, venture capital funded, corporately owned. Thus, while social media bear the promise of making good on the emancipatory power of networked communication, in fact they re-introduce the producer-consumer divide on another layer, that of host/user. They perform a false Aufhebung of the broadcast paradigm. Therefore I think the term prosumer is misleading and not very useful. While the users do produce something, there is nothing 'pro' as in professional in it.

This leads to a second point. The conflict between labour and capital has played itself out via mechanization and rationalization, scientific management and its refinement, such as the scientific management of office work, the proletarisation of wrongly called 'white collar work', the replacement of human labour by machines in both the factory and the office, etc. What this entailed was an extraction of knowledge from the skilled artisan, the craftsman, the high level clerk, the analyst, etc., and its formalisation into an automated process, whereby this abstraction decidedly shifts the balance of power towards management. Now what happened with the transition from Web 1.0 to 2.0 is a very similar process. Remember the static homepage in html? You needed to be able to code a bit, actually for many non-geeks it was probably the first satisfactory coding experience ever. You needed to set the links yourself and check the backlinks. Now a lot of that is being done by automated systems. The linking knowledge of freely acting networked subjects has been turned into a system that suggests who you link with and that established many relationships involuntarily. It is usually more work getting rid of this than to have it done for you. Therefore Web 2.0 in many ways is actually a dumbing down of people, a deskilling similar to what has happened in industry over the past 200 years.

Wanted to stay short and precise, but need to add, social media is a misnomer. What social media would be are systems that are collectively owned and maintained by their users, that are built and developed according to their needs and not according to the needs of advertisers and sinister powers who are syphoning off the knowledge generated about social relationships in secret data mining and social network analysis processes.

So there is a solution, one which I continue to advocate: lets get back to creating our own systems, lets use free and open source software for server infrastructures and lets socialise via a decentralised landscape of smaller and bigger hubs that are independently organised, rather than feeding the machine ..." (IDC mailing list, Oct 31, 2009)

Discussion 1

Protocollary Power and P2P

Michel Bauwens on the P2P aspects of Protocollary Power:

The P2P era indeed adds a new twist, a new form of power, which we have called Protocollary Power, and which was first clearly identified and analyzed by Alexander Galloway in his book Protocol. We have already given some examples. One is the fact that the blogosphere has devised mechanisms to avoid the emergence of individual and collective monopolies, through rules that are incorporated in the software itself. Another was whether the entertainment industry would succeed in incorporating software or hardware-based restrictions to enforce their version of copyright. There are many other similarly important evolutions to monitor: Will the internet remain a point to point structure? Will the web evolve to a true P2P medium through Writeable Web developments? The common point is this: social values are incorporated, integrated in the very architecture of our technical systems, either in the software code or the hardwired machinery, and these then enable/allow or prohibit/discourage certain usages, thereby becoming a determinant factor in the type of social relations that are possible. Are the algorithms that determine search results objective, or manipulated for commercial and ideological reasons? Is parental control software driven by censorship rules that serve a fundamentalist agenda? Many issues are dependent on hidden protocols, which the user community has to learn to see (as a new form of media literacy and democratic practice), so that it can become an object of conscious development, favoring peer to peer processes, rather than the restrictive and manipulative command and control systems. In P2P systems, the formal rules governing bureaucratic systems are replaced by the design criteria of our new means of production, and this is where we should focus our attention. Galloway suggests that we make a diagram of the networks we participate in, with dots and lines, nodes and edges.
Important questions then become: Who decides who can participate?, or better, what are the implied rules governing participation? (since there is no specific 'who' or command in a distributed environment); what kind of linkages are possible? On the example of the internet, Galloway shows how the net has a peer to peer protocol in the form of TCP/IP, but that the Domain Name System is hierarchical, and that an authoritative server could block a domain family from operating. This is how power should be analyzed. Such power is not per se negative, since protocol is needed to enable participation (no driving without highway code!), but protocol can also be centralized, proprietary, secret, in that case subverting peer to peer processes. However, the stress on protocol, which concerns what Yochai Benkler calls the 'logical layer' of the networks, should not make us forget the power distribution of the physical layer (who owns the networks), and the content layer (who owns and controls the content).

Protocols are Designed by People

Harry Halpin:

"Galloway is correct to point out that there is control in the internet, but instead of reifying the protocol or even network form itself, an ontological mistake that would be like blaming capitalism on the factory, it would be more suitable to realise that protocols embody social relationships. Just as genuine humans control factories, genuine humans – with names and addresses – create protocols. These humans can and do embody social relations that in turn can be considered abstractions, including those determined by the abstraction that is capital. But studying protocol as if it were first and foremost an abstraction without studying the historic and dialectic movement of the social forms which give rise to the protocols neglects Marx’s insight that

Technologies are organs of the human brain, created by the human hand; the power of knowledge, objectified.[8]

Bearing protocols’ human origination in mind, there is no reason why they must be reified into a form of abstract control when they can also be considered the solution to a set of problems faced by individuals within particular historical circumstances. If they now operate as abstract forms of control, there is no reason why protocols could not also be abstract forms of collectivity. Instead of hoping for an exodus from protocols by virtue of art, perhaps one could inspect the motivations, finances, and structure of the human agents that create them in order to gain a more strategic vantage point. Some of these are hackers, while others are government bureaucrats or representatives of corporations – although it would seem that hackers usually create the protocols that actually work and gain widespread success. To the extent that those protocols are accepted, this class that I dub the ‘immaterial aristocracy’ governs the net. It behoves us to inspect the concept of digital sovereignty in order to discover which precise body or bodies have control over it."

There is No Openness without Secrecy and Control

Tim Leberecht insists, taking Wikileaks as a case study, that there is no openness without secrecy:

“an ecosystem on the Social Web could be seen as a system in permanent crisis – it is always in flux, and its composition and value are constantly threatened by a multitude of forces, from the inside and the outside. What if we understood “designing for the loss of control” as designing for structures that are in a permanent crisis? Crises are essentially disruptions that shock the system. They are deviations from routines, and the very variance that the advocates of planning and programs (the “Push” model) so despise. At their own peril, because they fail to realize that variance is the mother of all meaning; it is variance that challenges the status quo, pulls people and their passions towards you, and propels innovation. “Designing for the loss of control” means designing for variance.

One system in permanent crisis that contains a high level of variance is WikiLeaks. The most remarkable thing about the site appears to be the dichotomy between the uncompromised transparency it aims at and the radical secrecy it requires to do so. The same organization that depends on the loss of control for its content very much depends on a highly controlled environment to protect itself and keep operating effectively. But not just that: Ironically, secrecy is also a fundamental prerequisite for the appeal of WikiLeaks’ “there are no secrets” claim. Simply put: there is no light without darkness. And there is no WikiLeaks without secrets.

Applied to systems and solutions design, this means that total openness is the antidote to openness. When everything is open, nothing is open. In order to design openness, one of the first decisions designers have to make is therefore to determine what needs to remain closed. This is a strategic task: making negative choices for positive effects. You need to build enough variance into a system to make it “flow” and yet retain some control over the underlying parameters (access, boundaries, authorship, participants, agenda, process, conversation, collaboration, documentation, etc.). Only if you maintain the fundamental ability to at least manage (and modify) the conditions for openness will you be able to create it. To design for the loss of control, control the parameters that enable it.”

Marvin Brown on Civic Design

Marvin Brown:

"A citizen is one among the many—one among others. Citizens are members. We are always citizens “of.” “Of what?” Of the many? Yes. But citizens are not mobs or crowds. Citizens are members of civic communities, and citizens create and re-create civic communities. The civic, in other words, comes into existence when we participate in civic conversations as citizens.

Civic conversations are quite different from commercial conversations. Commercial conversations are about commerce—about the exchange and the overall flow of things. Civic conversations are about how we want to live together—about the design of our collective life. Civic conversations should be the context or platform for commercial conversations. Only when we know how we want to live together will we know how to design the flow of things.

In the history of the United States, for the most part, commercial conversations have dominated civic conversations. Still, we have witnessed the rise of the civic, such as in the civil rights movement. And, now, we see it again. The civic is occupying the commercial.

The goal, of course, is not to eliminate commerce, but to civilize it—to have commercial conversations about how to provide for one another on a civic platform of moral equality and reciprocity. Commerce is not the problem. The problem is its separation from civic norms.

When people say, “We have seen the problem and the problem is us,” they deceive themselves. We are not the problem. The problem is one of design. Our current design of how we live together is unjust and unsustainable, and it is still controlled by commercial conversations without any moral foundation. Those who control financial markets are sovereign. If we expand and protect civic conversations we may, in time, participate in the solution—an economy based on civic norms making provisions for this and future generations."

Historical Origins of the Internet Protocol

Nicolas Mendoza:

"The principles guiding the early designs of the Internet entailed a deep perversion of traditional models of hierarchical military power. This perversion occurred the moment the military moved from a communications model of command and control to one based on distributed command and control. To understand how this move was possible it is useful to reread the opening words of Reliable Digital Communications Systems Using Unreliable Network Repeater Nodes, perhaps the most bizarre introduction to a technical paper ever written:

The cloud-of-doom attitude that nuclear war spells the end of the earth is slowly lifting from the minds of the many....A new view emerges: the possibility of a war exists but there is much that can be done to minimize the consequences.

If war does not mean the end of the earth in a black-and-white manner, then it follows that we should do those things that make the shade of gray as light as possible: to plan now to minimize potential destruction and to do all those things necessary to permit the survivors of the holocaust to shuck their ashes and reconstruct the economy swiftly."

The author is Paul Baran, and the 1960 paper describes the first ever theoretical model for an entirely digital distributed communications network. When Baran started the research that led to his model, a few years earlier, more than a decade of nuclear threat had been enough to dissolve the euphoric sense of American invulnerability that resulted from WWII, so the survivor and his gray world had to be invented.

Cultural artefacts of the time show an imaginary where there is no paradigmatic survivor but rather a reproduction of class structure as societies go through the experience of the apocalypse. Within the space of the ruling ideological framework, Baran's 'shades of gray' started to emerge. Cold War era movies like When the Wind Blows, The War Game, The Day After, or the Japanese (and post-Hiroshima) Barefoot Gen portray almost identical dramas of lay survivors as they negotiate the dawn of hell on earth. These lay survivors were, however, at best a secondary concern of those for whom the web was created. Placebos in the form of nuclear emergency contingency pamphlets were the only packages being distributed to them. Their worse-than-death agony was expected, an integral part of the ever-flourishing collection of nuclear war scenarios. Belonging in this sense to a different category of cultural products of the era are Kubrick’s acclaimed Dr. Strangelove and Herman Kahn’s less acclaimed book On Thermonuclear War, the former a slightly caricaturised version of the terroristic rationality of the latter. These portray a very different perspective on surviving the apocalypse, that of the powerful. Survivability of the elite, even after absolute Doomsday-machine powered annihilation, was initially the one remaining issue.

In between these two extreme experiences of the apocalypse (one mediated by a wooden ‘inner core or refuge’ and a pamphlet, and the other by reinforced concrete and endless sex), existing societies started representing their pre-apocalyptic relationships of power through a new and flourishing ecology: an ever increasing diversity of individualistic bunkers tailored, ironically, to the individual 'nuclear family' and its corresponding social status. For instance, a booklet called The Family Fallout Shelter, distributed by the US government, laid out the range: "The least expensive shelter described is the Basement Concrete Block Shelter. The most expensive is the Underground Concrete Shelter."

The sanctity of the affordance abyss between the layman and the president was first transgressed by the figure of the secondary commander. Once his needs entered the realm of what is taken seriously after the bomb, the logic of post-apocalyptic life (i.e. of the network) had been perverted. It seems now like an insignificant concession, but it was all it took to redraw the diagram of power. In a 1990 interview Paul Baran recalls how the seemingly subtle shift of accommodating the needs of secondary commanders came to conceptually redefine his model:

The great communications need of the time was a survivable communications capacity that could broadcast a single teletypewriter channel. The term used to describe this need was "minimal essential communications," a euphemism for the President to be able to say "You are authorized to fire your weapons". Or "hold your fire". These are very short messages. The initial strategic concept at that time was if you can build a communications system that could survive and transmit such short messages, that is all that is needed... . The major initial objection to the scheme was its limited bandwidth. The generals would say, "Yes, that would be okay for the President. But I gotta do this, and so and so gotta do this, and that command gotta do that. We need more communication than a single teletypewriter channel." After receiving this message back consistently, I said, "Okay, back to the drawing board. But this time I'm going to give them so damn much communication capacity they won't know what in hell to do with it all." So that became my next objective. Then I went from there to try to design a survivable network with so much more capacity and capability that this common objection to bandwidth limitation would be overcome.

This is the moment when the perversion happened, when the movement toward distributed command and control took place. The limited bandwidth distribution model still reproduced the polarity of power in the sense that it only considered the limited requirements of the President. Boosting bandwidth made it useful for secondary actors. Suddenly, the architecture of the desirable network stopped mimicking the hierarchies of the chain of command. In the aftermath, even in the absence of the top commanders, a network of secondary commanders would have means of communication and perhaps retaliatory power. The shift was reflected not only in the model of distributed communications but at all levels, especially in the characteristics of the data routing protocol. The concept for this protocol received the name of 'hot potato' packet switching.

"Thus, in the system described, each node will attempt to get rid of its messages by choosing alternate routes if its preferred route is busy or destroyed. Each message is regarded as a "hot potato," and rather than hold the "hot potato," the node tosses the message to its neighbour, who will now try to get rid of the message."

In terms of control the 'hot potato' model is a shift from node-centric control to immanent control distributed through the network. Because the node has no control over the full life of a packet, there is no feedback relationship between node and packet. Hence, there is really no nodal control as per Norbert Wiener's seminal definition of control as feedback. Packet control does ultimately happen, but as a result of the whole network informing the packet of the best available route in real time, which is to say that ultimately it is the multitude of packets that are, collectively, in control of themselves. In practical terms this means that end users in Baran's design find themselves in a situation of equipotentiality.
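The 'hot potato' rule quoted above can be sketched in a few lines. The topology, the failure pattern, and the forwarding preference below are invented for illustration (Baran's actual scheme used learned handover tables, not random choice); the sketch only shows the structural property the text describes: each node immediately passes the packet on to a live neighbour rather than holding it, so delivery survives destroyed nodes without any central routing authority.

```python
import random

# A small mesh of nodes and their neighbours.
links = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def hot_potato(src, dst, destroyed, rng, max_hops=20):
    """Forward a packet hop by hop, avoiding destroyed nodes.

    Returns the path taken, or None if the packet cannot be delivered.
    """
    path, node = [src], src
    for _ in range(max_hops):
        if node == dst:
            return path
        alive = [n for n in links[node] if n not in destroyed]
        if not alive:
            return None  # this node is cut off from the network
        # Hand to the destination if it is a live neighbour; otherwise
        # toss the 'hot potato' to any live neighbour rather than hold it.
        node = dst if dst in alive else rng.choice(alive)
        path.append(node)
    return None  # packet wandered too long; dropped

rng = random.Random(0)
print(hot_potato("A", "D", destroyed={"B"}, rng=rng))  # ['A', 'C', 'D']
```

With node B destroyed, the packet from A still reaches D via C; no surviving node holds a map of the whole network, which is the sense in which control here is immanent rather than nodal.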

Paul Baran's network was never built, but the model he proposed was a major influence on the creation of ARPANET, the primordial web built by the US Department of Defence. In 1973 the need arose to reconcile incompatibilities inherent to diverse data transmission technologies and to communication with other networks like the French network CYCLADES, and so the TCP/IP protocol suite was designed. Because TCP/IP enabled networks of diverse characteristics to communicate, the resulting network-of-networks was called the Inter-net.

Robert Kahn and Vinton Cerf, creators of TCP/IP, describe it as "a simple but very powerful and flexible protocol which provides for variation in individual network packet sizes, transmission failures, sequencing, flow control, and the creation and destruction of process-to-process associations." In the TCP/IP protocol, flexibility, simplicity, and scalability join survivability as the defining features of the design. However, in a 1988 report called The Design Philosophy of the DARPA Internet Protocols, David D. Clark recounts the original objectives of the Internet architecture and discusses “the relation between these goals and the important features of the protocols.”


The solution Paul Baran crafted for the problem of network survivability, to essentially distribute power evenly throughout the network, irreparably breaks the social structure that over centuries had revolved around processes of accumulation and consolidation of power. Because it was initially confined to the few who already were supposed to hold power to begin with, distributed power was a tolerable concept. The egalitarian idea of distributed power, paradoxically made possible only as a means of military ‘command and control’ to survive absolute violence, was accepted under the retroactively delusional assumption that the distribution of power wouldn’t affect its concentration.

The struggles in the network emerge from the tension between the contradictory concepts of 'command' and 'control', buried deep in the protocol that governs it. The expression ‘command and control’ describes the abilities to initiate and stop action, respectively. “At its crudest level 'command and control' in nuclear war can be boiled down to this: command means being able to issue the instruction to 'fire' missiles, and control means being able to say 'cease firing'.” It can be conflated into a simpler term: ‘power’; as this bipolar attribute is distributed, pre-existing powers experience loss, disorientation and traumatic distress. If we follow Foucault’s propositions according to which power is always relational and exists to be exercised, a network for distributed command and control (i.e. distributed power) creates a situation where centerless power is exercised in all directions. And so the paranoid genesis of the Net, combined with the absolute inability of the military, and even of academia, to foresee the impact their toy would have on the planet, produced the enormous historical faux-pas in the logic of power that is the Internet.” (February 2012)

Discussion 2: Typology of Countermeasures

Nicholas Diakopoulos:

"There are a number of human influences embedded into algorithms, such as criteria choices, training data, semantics, and interpretation. Any investigation must therefore consider algorithms as objects of human creation and take into account intent, including that of any group or institutional processes that may have influenced their design. It’s with this concept in mind that I transition into devising a strategy to characterize the power exerted by an algorithm. I’ll start first with an examination of transparency, and how it may or may not be useful in characterizing algorithms. Then I’ll move into how you might employ reverse engineering in the investigation of algorithms, including both theoretical thinking and practical use cases that illustrate the technique. I conclude the section with certain methodological details that might inform future practice in developing an investigative reporting “beat” on algorithms, including issues of how to identify algorithms for investigation, sample them, and find stories.


Transparency, as it relates to algorithmic power, is useful to consider as long as we are mindful of its bounds and limitations. The objective of any transparency policy is to clearly disclose, to the public, information related to a consequence or decision—so that whether voting, buying a product, or using a particular algorithm, people are making more informed decisions. Sometimes corporations and governments are voluntarily transparent. For instance, an executive memo from President Obama in 2009 launched his administration into a big transparency-in-government push. Google publishes a biannual transparency report showing how often it removes or discloses information to governments. Public relations concerns or competitive dynamics can incentivize the release of information to the public. In other cases, the incentive isn’t there to self-disclose, so the government sometimes intervenes with targeted transparency policies that compel disclosure. These often prompt the disclosure of missing information that might have bearing on public safety, the quality of services provided to the public, or issues of discrimination or corruption that might persist if the information weren’t available. Transparency policies like restaurant inspection scores or automobile safety tests have been quite effective, while nutrition labeling, for instance, has had limited impact on issues of health or obesity. Moreover, when the government compels transparency on itself, the results can be lacking. Consider the Federal Agency Data Mining Reporting Act of 2007,[19] which requires the federal government to be transparent about everything from the goals of data mining, to the technology and data sources used, to the efficacy or likely efficacy of the data mining activity and an assessment of privacy and the civil liberties it impacts.
The 2012 report from the Office of the Director of National Intelligence (ODNI) reads, “ODNI did not engage in any activities to use or develop data mining functionality during the reporting period.”[20] Meanwhile, Edward Snowden’s leaked documents reveal a different and conflicting story about data mining at the NSA. Even when laws exist compelling government transparency, the lack of enforcement is an issue. Watchdogging from third parties is as important as ever. Oftentimes corporations limit how transparent they are, since exposing too many details of their proprietary systems (trade secrets) may undermine their competitive advantage, hurt their reputation and ability to do business, or leave the system open to gaming and manipulation. Trade secrets are a core impediment to understanding automated authority like algorithms, since they, by definition, seek to hide information for competitive advantage.[21] Moreover, corporations are unlikely to be transparent about their systems if that information hurts their ability to sell a service or product, or otherwise tarnishes their reputation. And finally, gaming and manipulation are real issues that can undermine the efficacy of a system. Goodhart’s law, named after the banker Charles Goodhart who originated it, reminds us that once people come to know and focus on a particular metric it becomes ineffective: “When a measure becomes a target, it ceases to be a good measure.”[22] In the case of government, the federal Freedom of Information Act (FOIA) facilitates the public’s right to relevant government data and documents. While in theory FOIA also applies to source code for algorithms, investigators may run into the trade secret issue here as well. Exemption 4 to FOIA covers trade secrets and allows the federal government to deny requests for transparency concerning any third-party software integrated into its systems. Government systems may also be running legacy code from 10, 20, or 30-plus years ago.
So even if you get the code, it might not be possible to reconstitute it without some ancient piece of enterprise hardware. That’s not to say, however, that more journalistic pressure to convince governments to open up about their code, algorithms, and systems isn’t warranted. Another challenge to using transparency to elucidate algorithmic power is the cognitive overhead required when trying to explicate such potentially complex processes. Whereas data transparency can be achieved by publishing a spreadsheet or database with an explanatory document of the scheme, transparency of an algorithm can be much more complicated, resulting in additional labor costs both in the creation of that information and in its consumption. Methods for usable transparency need to be developed so that the relevant aspects of an algorithm can be presented in an understandable and plain-language way, perhaps with multiple levels of detail that integrate into the decisions that end-users face as a result of that information. When corporations or governments are not legally or otherwise incentivized to disclose information about their algorithms, we might consider a different, more adversarial approach.

Reverse Engineering

While transparency faces a number of challenges as an effective check on algorithmic power, an alternative and complementary approach is emerging based around the idea of reverse engineering how algorithms are built. Reverse engineering is the process of articulating the specifications of a system through a rigorous examination drawing on domain knowledge, observation, and deduction to unearth a model of how that system works. It’s “the process of extracting the knowledge or design blueprints from anything man-made.”[23] Some algorithmic power may be exerted intentionally, while other aspects might be incidental. The inadvertent variety will benefit from reverse engineering’s ability to help characterize unintended side effects. Because the process focuses on the system’s performance in-use it can tease out consequences that might not be apparent even if you spoke directly to the designers of the algorithm. On the other hand, talking to a system’s designers can also uncover useful information: design decisions, descriptions of the objectives, constraints, and business rules embedded in the system, major changes that have happened over time, as well as implementation details that might be relevant.[24][25] For this reason, I would advocate that journalists engage in algorithmic accountability not just through reverse engineering but also by using reporting techniques, such as interviews or document reviews, and digging deep into the motives and design intentions behind algorithms. Algorithms are often described as black boxes, their complexity and technical opacity hiding and obfuscating their inner workings. At the same time, algorithms must always have an input and output; the black box actually has two little openings. We can take advantage of those inputs and outputs to reverse engineer what’s going on inside.
If you vary the inputs in enough ways and pay close attention to the outputs, you can start piecing together a theory, or at least a story, of how the algorithm works, including how it transforms each input into an output, and what kinds of inputs it’s using. We don’t necessarily need to understand the code of the algorithm to start surmising something about how the algorithm works in practice.

[Figure 1. Two black-box scenarios with varying levels of observability: (A) the input-output relationship is fully observable; (B) only the outputs are observable.]

Figure 1 depicts two different black-box scenarios of interest to journalists reverse engineering algorithms by looking at the input-output relationship. The first scenario, in Figure 1(A), corresponds to an ability to fully observe all of an algorithm’s inputs and outputs. This is the case for algorithms accessible via an online API, which facilitates sending different inputs to the algorithm and directly recording the output. Figure 1(B) depicts a scenario in which only the outputs of the algorithm are visible. The value-added model used in educational rankings of teachers is an example of this case. The teacher rankings themselves became available via a FOIA request, but the inputs to the algorithm used to rank teachers were still not observable. This is the most common case that data journalists encounter: a large dataset is available but there is limited (or no) information about how that data was transformed algorithmically. Interviews and document investigation are especially important here in order to understand what was fed into the algorithm, in terms of data, parameters, and ways in which the algorithm is used. It could be an interesting test of existing FOIA laws to examine the extent to which unobservable algorithmic inputs can be made visible through document or data requests for transparency.
Sometimes inputs can be partially observable but not controllable; for instance, when an algorithm is being driven off public data but it’s not clear exactly what aspect of that data serves as inputs into the algorithm. In general, the observability of the inputs and outputs is a limitation and challenge to the use of reverse engineering in practice. There are many algorithms that are not public facing, used behind an organizational barrier that makes them difficult to prod. In such cases, partial observability (e.g., of outputs) through FOIA, Web-scraping, or something like crowdsourcing can still lead to some interesting results."
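The input-variation technique for the fully observable scenario (Figure 1(A)) can be sketched concretely. The "black box" scoring function below is invented for illustration, standing in for a real system the investigator cannot read; the probing code only assumes it can submit inputs and observe outputs, as with an online API.

```python
def black_box(record):
    # Hidden logic the investigator cannot see: it weights 'income',
    # ignores 'age', and embeds a zip-code rule. (Invented for the sketch.)
    return 2 * record["income"] + (5 if record["zip"] == "10001" else 0)

baseline = {"income": 10, "age": 40, "zip": "94110"}

def sensitivity(feature, alt_value):
    """Vary one input at a time and report how much the output shifts."""
    probe = dict(baseline, **{feature: alt_value})
    return black_box(probe) - black_box(baseline)

for feature, alt in [("income", 11), ("age", 70), ("zip", "10001")]:
    print(f"{feature}: output shift = {sensitivity(feature, alt)}")
# income: output shift = 2   -> income is an input, roughly linear
# age: output shift = 0      -> age appears to be ignored
# zip: output shift = 5      -> a zip-code rule is embedded in the algorithm
```

Even this crude one-factor-at-a-time probing yields a story about the algorithm (which inputs matter, and roughly how) without any access to its code; real investigations must also contend with noise, interactions between inputs, and rate limits on probing.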


Wikileaks: Two Exploits Against Protocollary Power

Alison Powell:

" In its transformation from a group of hacktivists using the opportunities of the internet to let data speak truth to power, to a phenomenon that leveraged much of the internet and impacted the production of mass media, Wikileaks illustrates the new possibilities of protocol and resistance.

In their 2007 book The Exploit, Galloway and Thacker argue that the network is merely a condition of possibility for the operation of protocol, which can direct control around the network. Within the form of power that is protocol lies the potential for an 'exploit', a disruption of protocol. The exploit is a property of the system, but it is also the thing that disrupts the system. This is one thing that WikiLeaks has effectively done: it identified the logic of control underlying both secrets themselves and the way that secrets have in the past become, and may now become, news.

The Wikileaks phenomenon displays a couple of exploits. First, the insertion of Wikileaks into the production of news exploited features of the journalism process, inserting a new intermediary into the process and potentially creating a kind of disruptive innovation in the production of journalistic content. This disruption is based on the permanence and reproducibility of internet data . Second, the response of Anonymous to attempts to shut down the WikiLeaks organization reiterated how the exploit is a central property of a system of protocol: despite the increasing state regulation and governance of the internet, it still to some degree operates based on principles of distributed power.

WikiLeaks became significant because of its lucky ability to exploit the news production process. Through the summer of 2010, internet scholars, security specialists and hacktivists gleefully discussed the tidbits of scandal and deluges of data that WikiLeaks released. This ranged from Sarah Palin’s e-mail to thousands of pages on the US involvement in Afghanistan. But these leaks were not effective at drawing attention to the secrets in the way that Assange may have initially intended, at least in his writings. Thus, the partnerships with news organizations became important in advancing his single purpose. It also contributed to the conflation of WikiLeaks as an organization with Assange as a character, as Anderson's piece in this collection describes in detail.

The partnerships, as they expanded beyond Assange's goals, began to attract significant attention to the leaks. Whereas the leaked information about Afghanistan was so voluminous that only a few media stories broke, the diplomatic cables were redacted by journalists working with large newspapers. As Sarah Ellison's Vanity Fair article describes in wincing detail, this partnership was awkward for Assange, for the other members of WikiLeaks, and especially for the various journalists and newspapers involved.

We can read this unique partnership with a leaker, a non-national holder of leaks, and the conventional mass media as an exploit of the news-making process, and an innovation in reporting. As Aaron Bady notes,

The way most journalists “expose” secrets as a professional practice — to the extent that they do — is just as narrowly selfish: because they publicize privacy only when there is profit to be made in doing so, they keep their eyes on the valuable muck they are raking, and learn to pledge their future professional existence on a continuing and steady flow of it. In muck they trust.

The partnership between the mass media journalists and WikiLeaks has provoked much reflection by the journalistic community. Like Ellison, many of them concentrate on the incongruity of negotiating with Assange, who often threatened to release unredacted diplomatic cables if the process did not go as smoothly as he had hoped. This was exacerbated by the media attention and legal threats placed on Assange himself. However, what these reports have in common is a description of a shifting role for the journalist: not necessarily collecting breaking news, but instead working to validate and contextualize raw data that has come from another source. In this way, WikiLeaks has exploited the network of news production and transformed it. Using the capacity of the internet to easily reproduce and maintain identical data, it has created a unique and persistent digital repository for the type of information which in the past would have been provided to journalists by trusted sources.

With the complicity of newsrooms, WikiLeaks has created a significant new innovation in the way that international news is investigated and released. It has succeeded in having previously unavailable material put out into the public domain. But it has not done this by maintaining a 'people power' wiki with every leak freely available. It has instead, through a combination of luck and strategy, added an innovation to the function of newsrooms strained by budget cuts. The publication of stories based on leaked cables continues even now, with a string of revelations appearing this week in India's national newspapers.

The WikiLeaks phenomenon also introduced a second exploit. This one was not the result of any actions of Assange or WikiLeaks volunteers. It was the outcome of a perception by individual internet users that powerful government and corporate interests shouldn't use technical tools to shut down Wikileaks without some retaliation – of the same type, of course.

After the release of the diplomatic cables, Wikileaks experienced censure by the US government, as well as the suspension of its bank accounts. Yochai Benkler describes the events this way in a February 2011 working paper:

“Responding to a call from Senate Homeland Security Committee Chairman Joe Lieberman, several commercial organizations tried to shut down Wikileaks by denial of service of the basic systems under their respective control. Wikileaks' domain name server provider stopped pointing at the domain, trying to make it unreachable. Amazon, whose cloud computing platform was hosting the data, cut off hosting services for the site. Banks and payment companies, like Mastercard, Visa, and PayPal, as well as the Swiss postal bank, cut off payment service to Wikileaks in an effort to put pressure on the site's ability to raise money from supporters around the world. It is hard to identify the extent to which direct government pressure, beyond the public appeals for these actions and their subsequent praise from Senator Joe Lieberman, was responsible for these actions.”

This government pressure, although coupled with pressure from US companies, was not centrally coordinated. Instead, it took the form of an 'integrated, cross-system attack', in Benkler's terms. In response, another exploit took place. Individuals aligning themselves with the online group identity Anonymous staged DDoS counter-attacks, succeeding in temporarily shutting down the MasterCard and PayPal websites. In a Foreign Policy article from December 2010, Evgeny Morozov likened denial of service attacks to “sit-ins” intended to disrupt institutions temporarily. He considered the actions of Anonymous to be a kind of direct action against internet censorship, separate from the role of WikiLeaks as a whistle-blower with ties to non-governmental organizations and the mass media. In particular, a distributed denial of service attack, which floods a web site with requests, is inconvenient from a technical point of view and requires resources to mitigate, but it is not inherently damaging. Following these retaliatory actions, thousands of individuals set up mirror sites of all of the content, defeating the purpose of cutting off access to the site. Both of these activities were undertaken not through action coordinated far in advance, but by thousands of individuals who could be mobilized quickly to act in support of a shared principle.
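The mirroring mentioned above worked precisely because digital copies are byte-identical and nearly free to make. As a minimal sketch of that property (the host names and file layout here are hypothetical, not how any actual mirror operated), consider replicating one document across several mirror locations:

```python
from pathlib import Path

def replicate(content: bytes, hosts: list[str], root: str = "mirrors") -> list[Path]:
    """Write a byte-identical copy of `content` into one directory per host.

    Each directory stands in for an independently operated mirror site.
    """
    copies = []
    for host in hosts:
        site_dir = Path(root) / host
        site_dir.mkdir(parents=True, exist_ok=True)
        out = site_dir / "index.html"
        out.write_bytes(content)  # an exact copy, indistinguishable from the original
        copies.append(out)
    return copies

# e.g. replicate(b"<html>...</html>", ["mirror-a.example", "mirror-b.example"])
```

Because every copy is exact, shutting down any one host removes nothing that the remaining hosts do not still hold in full.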

The response from Anonymous and other internet users is a reminder that the Internet is not structured the way a broadcaster is. It can be 'killed' in one country, but for the moment it remains a set of interlinked distributed networks, where data that can cheaply and easily be reproduced can be maintained for long periods of time, across national borders. Exploits of these networks don't only come from powerful actors. They can come from individuals and collectives, and may indicate a new role for the multitude."
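The DNS cutoff Benkler describes illustrates this structure: delisting a domain removes only the pointer from a name to an address, not the data or the mirrors registered under other names. A toy model, using a plain dictionary as a stand-in for the global DNS (every name and address below is hypothetical, drawn from the RFC 5737 documentation ranges):

```python
# A dict standing in for the distributed DNS records of several providers.
dns_records = {
    "leaks.example.org": "203.0.113.7",
    "mirror-a.example.net": "198.51.100.4",
    "mirror-b.example.net": "198.51.100.5",
}

def resolve(name: str):
    """Return the IP address on record for `name`, or None if delisted."""
    return dns_records.get(name)

# A provider "stops pointing at the domain": one record disappears...
dns_records.pop("leaks.example.org", None)

# ...but mirrors registered under other names, with other providers,
# still resolve, and the original server's own IP address is untouched.
```

The asymmetry is the point: removing one record is a local act, while the replicated data remains reachable through every name and address the remover does not control.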

Key Books to Read

  • The Exploit: A Theory of Networks. Alexander Galloway and Eugene Thacker.
  • Frank Pasquale. The Black Box Society: The Secret Algorithms That Control Money and Information.


  • Code/Space: Software and Everyday Life. Rob Kitchin and Martin Dodge. MIT Press, 2011 [2]: Examples of code/space include airport check-in areas, networked offices, and cafés that are transformed into workspaces by laptops and wireless access. Kitchin and Dodge argue that software, through its ability to do work in the world, transduces space. They then develop a set of conceptual tools for identifying and understanding the interrelationship of software, space, and everyday life, and illustrate their arguments with rich empirical material. Finally, they issue a manifesto, calling for critical scholarship into the production and workings of code rather than simply the technologies it enables.

More Information

  1. "The WikiLeaks Phenomenon and New Media Power" by Alison Powell. The New Everyday, April 8, 2011.
  2. Gillespie, Tarleton L., The Politics of Platforms (May 1, 2010). New Media & Society, Vol. 12, No. 3, 2010. [3]
  3. Democratizing software: Open source, the hacker ethic, and beyond. By Brent K. Jesiek. First Monday, volume 8, number 10 (October 2003).
  4. The Immaterial Aristocracy of the Net
  5. Overview of architectures of control in the digital environment, by Dan Lockton.
  6. The Politics of Code in Web 2.0. By Ganaele Langlois, Fenwick McKelvey, Greg Elmer, and Kenneth Werbin. Fibreculture Journal, Issue 14. [4]
  7. How Commercial Social Networks Hinder Connective Learning‎
  8. Algorithmic Accountability of Journalists


  1. Kevin Slavin on the Algorithms that Shape Our World, mentions in particular the stock trading algorithms

Related Concepts

  1. Algorithmically–Defined Audiences
  2. Digital Sovereignty
  3. Occlusive Reality
  4. Protological Control
  5. Social Machines
  6. Value Sensitive Design