Digital Sovereignty

From P2P Foundation

How is sovereignty exercised on the internet and in cyberspace?

History

From Harry Halpin's article in Mute magazine, ‘The Immaterial Aristocracy of the Internet’, at http://www.metamute.org/en/Immaterial-Aristocracy-of-the-Internet

Origins

"Although popular legend has it that the internet was created to survive a nuclear war, Charles Herzfeld (former director of DARPA, the Defence Advanced Research Projects Agency responsible for funding what became the internet) notes that this is a misconception.

In fact, the internet came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators who should have access to them were geographically separated from them.[9]

The internet was meant to unite diverse resources and increase communication among computing pioneers. In 1962, J.C.R. Licklider of MIT proposed the creation of a ‘Galactic Network’ of machines and, after obtaining leadership of DARPA, he proceeded to fund this project. Under his supervision the initial ARPANet came into being.

Before Licklider’s idea of the ‘Galactic Network’, networks were assumed to be static and closed systems. One either communicated with a network or one did not. However, early network researchers determined that there could be an ‘open architecture networking’ where a meta-level ‘internetworking architecture’ would allow diverse networks to connect to each other as peers. Earlier, more limited ways of interconnecting networks existed, but

they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.

This concept became the ‘Network of Networks’ or the ‘internet’ – anticipating the structure of later social movements. While the internet architecture provided the motivating abstract concepts, it did not define at the outset a ‘scalable transfer protocol’ – a concrete mechanism that could actually move the bits from one network to another. Robert Kahn and Vint Cerf devised a protocol that took into account four key factors:

1. Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the internet.

2. Communications would be on a best effort basis. If a packet didn’t make it to the final destination, it would shortly be retransmitted from the source.

3. Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

4. There would be no global control at the operations level.

The solution to this problem was TCP/IP. Data is subdivided into ‘packets’ that are all treated independently by the network. Any data sent over the internet is divided into packets of relatively equal size by TCP (Transmission Control Protocol), which then sends the packets over the network using IP (Internet Protocol). Each computer has an Internet Number, a four-byte destination address such as 152.2.210.122, and IP routes the packets through various black boxes, like gateways and routers, that do not try to reconstruct the original data from the packets. At the recipient end, TCP collects the incoming packets and then reconstructs the data. This protocol, which allows communication to continue even when large sections of the network are removed, is the most powerful technological ancestor of the network form of organisation.

While the system is decentralised in principle, in reality it is a hybrid with centralised elements. The key mapping of domain names to individual machines (resolving a domain like www.ibiblio.org to an IP address like 152.46.7.122) comes from a hierarchical domain name authority controlled by a centralised body, namely ICANN. Furthermore, this entire process relies on a small number of top-level name servers.

This is a system vulnerable to flaws in the protocols used to exchange routing and domain name information, as exemplified by the Pakistani government’s 2008 blocking of YouTube, which briefly took the site offline worldwide. More radically democratic structures of digital sovereignty could probably prevent such blocking in the first place. Indeed, it is the historical origins and function of these bodies of digital sovereignty that need exploration.

The First Immaterial Aristocracy

Although the internet was started by DARPA as a military-funded research project, it soon spread beyond the rarefied confines of the university. Once news of this ‘Universal Network’ arrived, universities, corporations, and even foreign governments began to ‘plug in’ voluntarily. The internet became defined by voluntary adherence to open protocols and procedures. The coordination of such world-spanning internet standards soon became a social task that DARPA itself was less and less able and willing to administer. As more and more nodes joined the internet, the military-industrial research complex seemed less willing to fund and research it, perhaps realising that it was slowly spinning out of their control. In 1984 the US military split its unclassified military network, MILNET, from the internet. No longer purely under the aegis of DARPA, the internet began a political process of self-organisation to establish a degree of autonomous digital sovereignty. Many academics and researchers then joined the Internet Research Steering Group (IRSG) to develop a long-term vision of the internet. With the academics and bureaucrats distracted, perhaps, the job of creating standards and maintaining the infrastructure fell into the hands of the hackers of the Internet Engineering Task Force (IETF). Unlike their predecessors, the hackers often did not possess postgraduate degrees in computer science, but they did have an intense commitment to the idea of a universal computer network.

The organisation of the IETF embodied the anarchic spirit of the hackers. It was an ad hoc and informal body with no board of directors, although it soon began electing the members of the Internet Architecture Board (IAB) – a committee of the non-profit Internet Society that oversees and ratifies the standards process of the net. However, the real actor in the creation of protocols was not the IAB or any other bureaucracy, but the IETF itself. The IETF credo, attributed to David Clark, the first Chair of the IAB, is: ‘We reject kings, presidents, and voting. We believe in rough consensus and running code.’ True to its credo, the IETF operates by a radical democratic process. There are no official or even unofficial membership lists, and individuals are not paid to participate. Even those who belong to an organisation must participate as individuals, and participation is entirely voluntary. Anyone may join, and ‘joining’ is defined only in terms of activity and contribution. Decisions do not have to be ratified by consensus or even majority voting, but require only a rough measure of agreement on an idea. IETF members prefer to judge an idea by actual implementation (running code), and arguments are decided by the effectiveness of practice. The structure of the IETF is defined by areas such as ‘Multimedia’ and ‘Security’, subdivided in turn into Working Groups on particular standards, such as ‘atompub’, which produced the widely used Atom standard for syndication of web content. In these Working Groups most of the work of hashing out protocols takes place.

Groups have elected Chairs whose task is to keep the group on topic. Even within the always technical yet sometimes partisan debates, there are no formalities, and everyone from professors to teenagers is addressed by their first name. This kind of informal organisation tends to develop informal hierarchies, and these are regarded as beneficial since they are usually composed of the most dedicated participants, those who volunteer the most of their time for the net: ‘A weekend is when you get up, put on comfortable clothes, and go into work to do your Steering Group work.’ If the majority of participants in the IETF feel that these informal hierarchies are getting in the way of practical work, then the chairs of Working Groups and other informal bureaucrats are removed by a voting process, which happened once to an entire clique of ‘informal leaders’ in 1992. The IETF is also mainly a virtual organisation, since almost all communication is handled by email, although it does hold week-long plenary sessions three times a year which attract over a thousand participants, with anyone welcome. Even at these face-to-face gatherings, most of the truly groundbreaking discussions seem to happen in the still more informal ‘Birds of a Feather’ discussions. The most important products of these list-serv discussions and meetings are the IETF RFCs (‘Requests for Comments’), whose very name demonstrates their democratic practice. These RFCs define internet standards such as URIs (RFC 3986) and HTTP (RFC 1945). The IETF still exists, and anyone can ‘join’ by simply participating in a list given on its homepage. The IETF operates with little explicit financing; many members are funded by their governments or corporate sponsors, but it remains open to those without financing.


The World Wide Web

One IETF participant, Tim Berners-Lee, had the vision of a ‘universal information space’ which he dubbed the ‘World Wide Web’.[11] His original proposal brings his belief in universality to the forefront:

We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities.[12]

The IETF, perhaps due to its own anarchic nature, had produced a multitude of incompatible protocols. While each protocol could enable computers to communicate over the internet, there was no universal format uniting the various protocols. Tim Berners-Lee had a number of key concepts:

1. Calling anything that someone might want to communicate with over the Internet a ‘resource’.

2. Each resource could be given a universal resource identifier (URI) that allowed it to be identified and perhaps accessed. The word ‘universal’ was used to ‘emphasize the importance of universality, and of the persistence of information.’

3. The idea of simplifying hypertext into a human-readable format for data on the web, so that any document could link to any other document.

These three principles formed the foundation of the World Wide Web. In the IETF, Berners-Lee, along with many compatriots such as Larry Masinter, Dan Connolly, and Roy Fielding, spearheaded development of URIs, HTML (HyperText Markup Language), and HTTP (HyperText Transfer Protocol). As Berners-Lee says, the creation of protocols was key to the web: ‘Since by being able to reference anything with equal ease,’ due to URIs, ‘a web of information would form’ based on

the few basic, common rules of ‘protocol’ that would allow one computer to talk to another, in such a way that when all computers everywhere did it, the system would thrive, not break down.[13]

In fact, the design of the web on top of the physical infrastructure of the internet is nothing but protocol.[14]

However, Berners-Lee was frustrated by the IETF, who in typically anarchic fashion rejected his idea that any standard could be universal. At the time a more hierarchical file-retrieval system known as ‘Gopher’ was the dominant way of navigating the internet. In one of the first cases of digital enclosure on the internet, the University of Minnesota decided to charge corporate (but not academic and non-profit) users for the use of Gopher, and immediately the system became a digital pariah. Berners-Lee, seeing an opening for the World Wide Web, surrendered to the IETF and renamed URIs ‘Uniform Resource Locators’ (URLs). Crucially, he got CERN (the European Organisation for Nuclear Research) to release any intellectual property rights they had to the web, and he also managed to create running code for his new standard in the form of the first web browser. Berners-Lee and others served primarily as untiring activists, convincing talented hackers to spend their time creating web servers and web browsers, as well as navigating the political and social process of creating web standards. Within a year the web had spread across the world. In what might be seen as another historical irony, years before the idea of a universal political space was analysed by Hardt and Negri as ‘Empire’, hackers both articulated and created a universal technological space.


A Crisis in Digital Sovereignty

In the blink of an eye, adoption of the web skyrocketed and the immaterial aristocracy of the IETF lost control of it. Soon all the major corporations had a website. They sent their representatives to the IETF in an attempt to discover who the powerbrokers of the internet were, but instead found themselves immersed in obscure technical conversations and mystified by the lack of any formal body of which to seize control. Instead of taking over the IETF, corporations began ignoring it. They did this by violating standards in order to gain market adoption through ‘new’ features. The battle for market dominance between the two largest opponents, Microsoft and the upstart Netscape, was based on an arms race of features supposedly created for the benefit of web users. These ‘new features’ in reality soon led to a ‘lock-in’ of the web where certain sites could only be viewed by one particular commercial browser. This began to fracture the rapidly growing web into incompatible corporate fiefdoms, building upon the work but destroying the sovereignty of the IETF. Furthermore, the entire idea of the web as an open space of communication began to be challenged, albeit unsuccessfully, by Microsoft’s concept of ‘push content’ and channels, which in effect attempted to replicate television’s earlier hierarchical and one-way model on the internet.

Behind the scenes, the creators of the web were horrified by the fractures the corporate browser wars had caused in their universal information space. In particular, Tim Berners-Lee felt that his original dream had been betrayed by corporations trying to create their own mutually incompatible fiefdoms for profit. He correctly realised it was in the long-term interests of both corporations and web users to have a new form of digital sovereignty. With the unique but informal status Berners-Lee enjoyed as the ‘inventor of the Web’ (although he freely and humbly admits that this was a collective endeavour), he decided to reconstitute digital sovereignty in the form of the World Wide Web Consortium (W3C). This non-profit organisation was dedicated to

leading the Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web.[15]

Corporations had ignored the IETF’s slow and impenetrable processes, so Berners-Lee moved from a model of absolute democracy to one of representative democracy. The major corporations would understand and embrace this, and the web would harness their power while preserving its universality. With the rapid growth of the web, Berners-Lee believed that an absolute democracy based on informal principles could not react quickly enough to the desires of users and prevent corporations from fracturing universality for short-term gain. Unlike the IETF, which only standardised protocols that were already widely used, the W3C would take a proactive stance, deploying standardised universal formats before various corporations or other forces could deploy their own. Berners-Lee was made director for life of the W3C, and ultimately his decision remains final, constituting a sort of strange immaterial monarchy. Since Berners-Lee has historically not exercised his formal powers as director, approving what the membership decides, his importance is minimised. The W3C marks the shift from the radical and open anarchy of the IETF to a more closed and representative system.


Digital Sovereignty Returns

W3C membership was open to any organisation, whether commercial, educational, governmental, for-profit or not-for-profit. Unlike membership of the IETF, however, it came at a price: $50,000 for corporations with revenues in excess of $50 million, and $5,000 for smaller corporations and non-profits. It was organised as a strict representative democracy, with each member organisation sending one member to the Advisory Committee. However, in practice it allowed hacker participation by keeping its lists public and allowing non-affiliated hackers to join its Working Groups for free under the ‘Invited Expert’ policy. By opening up a ‘vendor neutral’ space, companies previously ‘interested primarily in advancing the technology for their own benefit’ could be brought to the table. This move away from the total fiscal freedom of the IETF reflected the increasing amount of money at stake in the creation of protocols, and the money needed to run standards bodies. Rather shockingly, when the formation of the W3C was announced both Microsoft and Netscape agreed to join. As a point of pride, Netscape even paid the full $50,000 fee, though they weren’t required to.

Having the two parties most responsible for fracturing the web at the table provided the crucial breakthrough for the W3C. It allowed them to begin standardisation of HTML in a vendor-neutral format that would allow web pages to be viewed in any standards-compliant browser. Berners-Lee’s cunning strategy to envelop the corporations within the digital sovereignty of the W3C worked:

The competitive nature of the group would drive the developments, and always bring everyone to the table for the next issue. Yet members also knew that collaboration was the most efficient way for everyone to grab a share of a rapidly growing pie.[16]

The original universal vision of the web was inscribed into the W3C’s mission statement: to expand the reach of the web to ‘everyone, everything, everywhere’. Other widely used standards, such as XML, have also come out of the W3C. However, with the web growing rapidly in the era of ‘web 2.0’, the W3C itself is seen as slow and unwieldy, with a political process too overwhelmed by corporate representatives. With Google’s rise to a new hegemonic position as the premier search engine, the web is increasingly centred around this highly secretive organisation, reminiscent of Microsoft’s monopolisation of the personal computer. Key members of the IAB and other protocol boards, like Vint Cerf, are also Google employees.

One example of this new political terrain is social networking, currently the primary way most new users interact with the web, and a field torn between Facebook and MySpace, heavily associated with Microsoft and Google respectively. Users of and developers for these services are increasingly tired of their data being hoarded by these companies in closed data silos. DataPortability.org, a more anarchic body that may signal a return to the heavily decentralised governance typical of the IETF, represents an effort to open up this data. In its latest redesign of HTML, the W3C has tried to open itself to a more IETF-like, radically democratic process, allowing hundreds of unaffiliated hackers to join for free. The next few years will determine whether the web centralises under either Google or Microsoft, or whether the W3C can prevent the next digital civil war. The immaterial aristocracy is definitely changing, and its next form is still unclear. Perhaps, in step with the open and free software movements, as the level of self-organisation of web developers and even users grows and they become increasingly capable of creating and maintaining these standards themselves, the immaterial aristocracy will finally dissolve.


Future

Harry Halpin:

"This inspection of the social forms, historical organisation, and finances ofthe protocol-building bodies of the net is not a mere historical excursion. It has consequences for the concrete creation of revolutionary collectivity in the here and now. Many would decry the very idea that such collectivity can be developed through the net as utopian. In the face of imperialist geopolitics masquerading behind the war on terror and rampant accompanying paranoia, such a utopian perspective is revolutionary. Clearly, a merely utopian perspective is not enough, it needs to be combined with concrete action to move humanity beyond capital. One critique of Michael Hardt and Antonio Negri’s concept of ‘the multitude’ as the new networked revolutionary agent is that its proponents have no concrete plan for bringing it from the virtual to the actual. Fashionable post-autonomism in general leaves us with little else but utopian demands for global citizenship and social democratic reforms such as guaranteed basic income. An enquiry into the immaterial aristocracy can help us recognise the social relations that determine the technological infrastructure which enables the multitude’s social form, while not disappearing into ahistoricism.

The technical infrastructure of the web itself is a model for the multitude:

The internet is the prime example of this democratic network structure. An indeterminate and potentially unlimited number of interconnected nodes communicate with no central point of control, all nodes regardless of territorial location connect to all others through a myriad of potential paths and relays.[17]

Our main thesis is that the creation of the protocols which comprise the internet was not the work of sinister forces of control, but the collective work of committed individuals, the immaterial aristocracy. What is surprising is how little empirical work has been done on this issue by political revolutionaries – with a few notable exceptions, such as the anarchist Ian Heavens. Yet the whole development of the internet could easily have turned out otherwise. We could all be on Microsoft Network, and we are dangerously close to having Google take over the web. One can hear the echo of Mario Tronti’s comments on the unsung struggles of the working class:

[…] perhaps we would discover that ‘organisational miracles’ are always happening, and have always been happening.[18]

The problem is not that ‘the hardest point is the transition to organisation’ for the multitude.[19] The problem of the hour is the struggle to keep the non-hierarchical and non-centred structure of the web open, universal, and free so as to further enable the spread of new revolutionary forms of life – although the cost is the continual spread of capital not far behind. The dangers of a digital civil war are all too real, with signs ranging from the Great Firewall of China and the US military plans, revealed in its Information Operations Roadmap, to ‘fight the net as it would a weapons system’, to the development of a multi-tier net that privileges the traffic of certain corporations willing to pay more, in effect crippling many independent websites and file-sharing programs. Having radicals participating in open bodies like the W3C and IETF may be necessary for the future survival of the web.

There is no Lenin in Silicon Valley, plotting the political programme of the network revolution. The beauty of the distributed network is that it makes the very idea of Lenin obsolete. Instead of retreating into neo-surrealism as The Exploit does, revolutionaries should be situationists, creating situations in which people realise their own strength through self-organisation. These situations are created not just by street protests and struggles over precarious labour, but through technical infrastructure. One example par excellence would be how the internet enabled the communication networks that created the ‘anti-globalisation’ movement. Of course, nets are not synonymous with revolution or even anti-capitalism, as the use of the net by corporations and governmental bodies to coordinate globalisation far outweighs its use by the ‘anti-globalisation’ movement. Still, given the paucity of any alternative put forward by Galloway and Thacker, the thesis that the very nature of protocol is inherently counter-revolutionary seems to be a theoretical dead end. It would be more productive to acknowledge that political battles around net protocols are increasingly important avenues of struggle, and the best weapon in this battle is history. A historical understanding of the protocols of the net can indeed lead to better and more efficient strategic interventions.

Hackers’ and net artists’ struggles against protocol are not the only means of liberation. The vast majority of these interventions are unknown to the immaterial aristocracy and those outside the circles of ‘radical’ digerati. Instead, we should see the creation of new protocols as a terrain of struggle in itself. The best case in point might be the creation of the Extensible Messaging and Presence Protocol (XMPP), which took instant messaging out of the hands of private corporations like AOL and allowed instant messaging to be implemented in a decentralised and open manner. This in turn allowed secure technologies like ‘Off-the-Record’ instant messaging to be developed, a technology that can mean the difference between life and death for those fighting repressive regimes. This protocol may become increasingly important even in Britain, since it is now illegal to refuse to give police private keys for encrypted email. These trends are important for the future of any revolutionary project, and the concrete involvement of radicals in this particular terrain of struggle could be a determining factor in the future of the net. Protocol is not only how control exists after decentralisation. Protocol is also how the common is created in decentralisation, another expression of humanity’s common desire for collectivity." (http://www.metamute.org/en/Immaterial-Aristocracy-of-the-Internet)

More Information

  1. Protocollary Power