P2P and Human Evolution Ch 2

From P2P Foundation

2. P2P as the Technological Framework of Cognitive Capitalism[1]

Chapter 2 of P2P and Human Evolution

2.1.A. Defining P2P as the relational dynamic of distributed networks

Alexander Galloway, in his book Protocol, makes an important and clear distinction between centralized networks (with one central hub through which everything must pass and be authorized, as in the old telephone switching systems), decentralized networks (with more than one center, but these subcenters still being authoritative, such as the U.S. airport system organized around hubs through which planes must pass), and distributed networks, where hubs may exist but are not obligatory (such as the internet). In distributed networks, participants may freely link with each other; they are fully autonomous agents. Hence the importance of clearly distinguishing between our usage of the concepts 'decentralized' and 'distributed'. Peer to peer is specifically the relational dynamic that arises in distributed networks.
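Galloway's distinction can be made concrete with a small sketch (a toy model of my own, not drawn from Protocol): represent each topology as an adjacency map and test whether removing a hub disconnects the remaining participants. In a centralized network it does; in a distributed one, other routes remain.

```python
from collections import deque

def reachable(adj, start):
    """Breadth-first search: return the set of nodes reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adj.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

def still_connected(adj, removed):
    """Is the network still one component after deleting node `removed`?"""
    rest = {n: {m for m in nbrs if m != removed}
            for n, nbrs in adj.items() if n != removed}
    nodes = list(rest)
    return len(reachable(rest, nodes[0])) == len(nodes)

# Centralized: every participant must pass through the single hub 'H'.
centralized = {'H': {'a', 'b', 'c'}, 'a': {'H'}, 'b': {'H'}, 'c': {'H'}}
# Distributed: participants link freely; hubs may exist but are optional.
distributed = {'a': {'b', 'c'}, 'b': {'a', 'c', 'd'},
               'c': {'a', 'b', 'd'}, 'd': {'b', 'c'}}

print(still_connected(centralized, 'H'))  # False: losing the hub severs everyone
print(still_connected(distributed, 'b'))  # True: other routes remain
```

The point of the sketch is structural: the distributed graph has no node whose removal silences the others, which is exactly the property that makes participants autonomous agents.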

So: what is peer to peer? Here’s a first tentative definition: it is a specific form of relational dynamic that is based on the assumed equipotency of its participants,[2] organized through the free cooperation of equals in view of the performance of a common task, for the creation of a common good, with forms of decision-making and autonomy that are widely distributed throughout the network.

P2P processes are not structureless, but are characterized by dynamic and changing structures which adapt themselves to phase changes. Their rules are not derived from an external authority, as in hierarchical systems, but generated from within. P2P does not deny ‘authority’, only fixed forced hierarchy, and therefore accepts authority based on expertise, initiation of the project, etc. P2P may be the first true meritocracy. The threshold for participation is kept as low as possible. Equipotency means that there is no prior formal filtering for participation; rather, it is the immediate practice of cooperation which determines the expertise and level of participation. Communication is not top-down and based on strictly defined reporting rules; instead, feedback is systemic, integrated in the protocol of the cooperative system. Techniques of 'participation capture' and other forms of social accounting make automatic cooperation the default scheme of the project. Personal identity becomes partly generated by the contribution to the common project.

P2P is a network, not a hierarchy (though it may have elements of hierarchy); it is 'distributed', though it may have elements of centralization and 'decentralization'; intelligence is not located at any center, but everywhere within the system. Assumed equipotency means that P2P systems start from the premise that they do not know in advance where the needed resource will be located: they assume that ‘everybody’ can cooperate, and do not use formal rules in advance to determine their participating members. Equipotency, i.e. the capacity to cooperate, is verified in the process of cooperation itself. Validation of knowledge and acceptance of processes are determined by the collective. Cooperation must be free, not forced, and not based on neutrality (i.e. the buying of cooperation in a monetary system). It exists to produce something. It enables the widest possible participation. These are a number of characteristics that we can use to describe P2P systems ‘in general’, and in particular as they emerge in the human lifeworld. Whereas participants in hierarchical systems are subject to the panoptism of the select few who control the vast majority, in P2P systems participants have access to holoptism, the ability of any participant to see the whole. Further on we will examine in more depth characteristics such as de-formalisation, de-institutionalisation and de-commodification, which are also at the heart of P2P processes.

Whereas hierarchical systems are based on creating homogeneity amongst their 'dependent' members, distributed networks using the P2P dynamic regulate the 'interdependent' participants while preserving heterogeneity. It is the 'object of cooperation' itself which creates the temporary unity. Culturally, P2P is about unity-in-diversity: it is a concrete 'post-Enlightenment' universalism predicated on common projects, while hierarchy is predicated on creating sameness through identification and exclusion, and is associated with the abstract universalism of the Enlightenment.

To get a good understanding of P2P, I suggest the following mental exercise: think about these characteristics, then about their opposites. In so doing, the radically innovative nature of P2P springs to mind. Though P2P is related to earlier social modes, those were most in evidence in the early tribal era, and it now emerges in an entirely new context, enabled by technologies that go beyond the barriers of time and space. After the dominance, during the last several millennia, of centralized and hierarchical modes of social organisation, it is thus in many ways a radically innovative emergence, and it also reflects a very deep change in the epistemological and ontological paradigms that determine behaviour and worldviews.

An important clarification: when we say that peer to peer systems have no hierarchy or are not centralized, we do not necessarily mean the complete absence of such characteristics. But in a P2P system, the use of hierarchy and centralization serves the goal of participation and many-to-many communication; they are not used to prohibit or dominate it. This means that though P2P arises in distributed networks, not all distributed networks exhibit P2P processes. Many distributed bottom-up processes, such as the swarming behaviour of insects, or the behaviour of buyers and sellers in a market, are not true P2P processes, to the degree that they lack holoptism and do not promote participation. P2P, as a uniquely human phenomenon, integrates moral and intentional aspects. When distributed meshworks, for example interlinking boards of directors,[3] serve a hierarchy of wealth and power, and are based on exclusion rather than participation, this does not qualify as a full P2P process.

P2P can be a partial element of another process, or it can be a full process. For example, the technological and collaborative infrastructure built around P2P principles may enable non-P2P processes. In the example just above it is the infrastructure of Empire, but it can also enable new types of marketplaces[4] and gift/sharing economy practices. Where P2P is a full process, we will argue that it is a form of communal shareholding producing a new type of Commons.

2.1.B. The emergence of peer to peer as technological infrastructure

But how does all of the above apply to technology?

In this and the next section, I will attempt to describe two related aspects. The first is that P2P-formatted technologies are now the very infrastructure of business processes. The second is that the new technologies of communication being created are in fact an alternative communication infrastructure that in part transcends the state and corporate control of traditional one-to-many mass media. It is also emerging as an infrastructure for the free cooperation of autonomous agents.

This is not to say that the new infrastructure is not controlled 'at all', or that corporate forces are not at work in it, but that we cannot be blind to its radical potential, or indeed its radical 'actuality'. Here, as in the other sections, we will see how P2P is at the same time the very basis of the system, while also significantly transcending it. While there is no direct cause-and-effect link between peer to peer technology as such and the peer to peer relational dynamic which is the topic of our research, peer to peer technology is more likely than centralized technology to be a technological basis for enabling and supporting peer to peer human relationships. This relationship is borne out by our description of the use of P2P technologies to create an alternative peer-based communications infrastructure (2.1.C), as well as an infrastructure for human cooperation (2.1.D).

The internet, as it was conceived by its founders (Abbate, 1999) and evolved in its earliest format, was a point-to-point network consisting of equal networks, in which data travels using different sets of resources as necessary. It is only later, after the rise of stronger and weaker networks, and of open, semi-closed and closed networks, that the internet became hybrid; but it still in essence functions as a distributed network, having no central core to manage the system. Its hierarchical elements, such as the layered internet protocol stack (though specifically designed to allow P2P processes), the domain name system (a decentralized system of authoritative servers which can disconnect participants: you can't arrive at an address without DNS intervention), or internet governance bodies,[5] do not prohibit many-to-many communication and participation, but enable it. The evolution of the internet is largely seen to be 'organic' rather than centrally directed: no single central player can direct it, though some players are more influential than others.

The web was similarly conceived as a many-to-many publishing medium, even though it follows a semi-hierarchical client-server model (hence decentralized rather than distributed). However, it is still, and will remain, an essentially participative medium allowing anyone to publish their own webpages. Given its incomplete P2P nature, it is in the process of becoming a true P2P publishing medium through the Writeable Web projects, which allow anyone to publish from their own or any other computer, in the form of blogging etc. Other P2P media are instant messaging, chat, IP telephony systems, etc. For the internet and the web, P2P was not yet explicitly theorized (though the idea of a network of networks was); they are weak P2P systems in that they only recognize ‘strong’ members: DNS-addressed computers in the case of the internet, servers in the case of the web. In the systems developed afterwards, P2P was explicitly theorized: they are ‘strong’ P2P systems, in which all members, including the weak members (computers without a fixed DNS address in the case of the internet, blogs with permalinks in the case of the web), can participate.

Filesharing systems were the first to be explicitly tagged with the P2P label, and this is probably the origin of the concept in the world of technology. In such systems, all voluntarily participating computers on the internet are mobilized to share files amongst all participating systems, whether documents, audio files, or audiovisual materials. In June 2003, video streaming became the internet application using the largest bandwidth, and some time before, online music distribution had already surpassed the physical distribution of CDs (in the U.S.). Of course, in the public mind filesharing is mostly associated with the piracy of copyrighted music and videos.[6] Though the earliest incarnations of these P2P systems still used centralized databases, they are now, largely thanks to the efforts of the music industry,[7] mostly true P2P systems, in particular BitTorrent and the planned development of Exeem. Each generation of P2P filesharing has been more consistent in its application of peer to peer principles.[8]
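What makes such a system 'true P2P' can be illustrated with a toy simulation (entirely hypothetical, not a real BitTorrent implementation): every peer that has received a piece of the file also serves it to others, so the whole swarm completes even though only one participant started with the full file and no central server exists.

```python
import random

def swarm_exchange(peers, pieces, rounds=100, seed=0):
    """Toy swarm: each round, every peer grabs one piece it is missing
    from any other peer that already holds it."""
    rng = random.Random(seed)
    for _ in range(rounds):
        for peer in peers:
            missing = pieces - peers[peer]
            wanted = [p for p in missing
                      if any(p in held for other, held in peers.items()
                             if other != peer)]
            if wanted:
                peers[peer].add(rng.choice(wanted))
        if all(peers[p] == pieces for p in peers):
            break
    return peers

# One 'seeder' holds the whole file; the others start empty yet still
# serve whatever pieces they have already received.
pieces = set(range(8))
peers = {'seeder': set(pieces), 'p1': set(), 'p2': set(), 'p3': set()}
done = swarm_exchange(peers, pieces)
print(all(held == pieces for held in done.values()))  # True: everyone completes
```

Every downloader is automatically also an uploader, which is the structural feature the text identifies: participation itself provides the resource.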

Finally, grid computing uses the P2P concept to create ‘participative supercomputers’, where resources, storage space and computing cycles can be drawn from any participant in the system, on the basis of need. It is generally seen as the next paradigm for computing. Even programming reflects the P2P concept, in object-oriented programming, where each object can be seen as a node in a distributed network.
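The grid idea, farming independent work units out to whatever participants have spare cycles, can be sketched as follows (a minimal illustration using local threads in place of remote volunteer machines; the prime-counting task is an arbitrary stand-in for real scientific workloads):

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(lo, hi):
    """Work unit: count primes in [lo, hi) by trial division."""
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def grid_count_primes(limit, workers=4, chunk=250):
    """Split the range into independent work units and farm them out,
    the way a grid mobilizes idle capacity across many machines."""
    bounds = [(lo, min(lo + chunk, limit)) for lo in range(0, limit, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: count_primes(*b), bounds))

print(grid_count_primes(1000))  # 168 primes below 1000
```

The essential property is that the work units are independent, so they can be computed by any participant in any order and merely summed at the end; that is what lets dormant resources anywhere in the network contribute.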

All of the above clearly shows that the new format of our technological infrastructure, which lies at the basis of business and economic processes, follows the P2P design. This infrastructure enables the interlinking of business processes beyond the borders of the individual factory and company, and the interlinking of all the individuals involved. Soon, and perhaps it is already the case today, it will be justified to claim that without P2P-formatted technologies, it would be impossible to carry out production and all the related economic mechanisms.

I could go on, but what should emerge in your mind is not a picture of a series of marginal developments, but the awareness that P2P networks are the key format of the technological infrastructure that supports the current economic, political and social systems. Companies have used these technologies to integrate their processes with those of partners, suppliers, consumers, and each other, using a combination of intranets, extranets, and the public internet; they have become the absolutely essential tool for international communication and business, and for enabling the cooperative, internationally coordinated projects carried out by teams. As we will see in our full review of the emergence of P2P practices across the social field, an emphasis on business and economic processes would be very one-sided. Politics, culture, and science are equally changed by distributed practices enabled by the new technological infrastructure. Examples are the growth of massive multi-authorship in different scientific fields, with hundreds of people involved in research projects, and the distributed use of scientific instruments, such as arrays of small radio telescopes.

On the other hand, P2P systems are not just the outcome of plans of the establishment, but are the result of the active intervention of consumers avid for free access to culture, of knowledge workers actively working to find technical solutions for their needed cooperative work, and of activists consciously working for the creation of tools for an emerging participative culture.[9] P2P is both 'within' and 'beyond' the current system.

2.1.C. The construction of an alternative media infrastructure

Distributed technological networks are the most important infrastructure for cognitive capitalism. As a communication infrastructure, the dominant transnational corporations could for a long time rely on their own private telecommunication networks; the internet has radically democratized access to this kind of infrastructure for everyone with access to a computer. Similarly, for its cultural hegemony, the dominant social system has relied on 'one-to-many' broadcasting systems, which require a heavy capital outlay and are controlled by monopolistic corporate interests in charge of 'manufactured consent', and in other countries by the state itself. The stranglehold of corporate media, including its hold on our very psyches (we 'think like television' even when we've not been watching it for years), is such that it has become all but impossible for any social minority (except religious and ethnic groups which can marshal vast resources themselves) to have its voice heard. Media reform seems definitely beyond reach. However, though the internet is also characterized by a certain commercial exploitation, and by very strong commercial entities such as Yahoo, as a whole, and as a distributed network, it is not owned or controlled by commercial entities, but by a network of various entities: commercial, governmental, nongovernmental, etc. It contains the historical promise of an 'alternative information and communication infrastructure', a many-to-many, bottom-up resource that can be used by various social forces. McKenzie Wark, in his Hacker Manifesto, distinguishes the producers of immaterial use value from the owners of the vectors of information, without whom no exchange value can be realized. The promise of the internet is that we now have a vector of information production, distribution and exchange that functions at least partly outside the control of what he calls the 'vectoralist' class.
The situation seems to be the following, using the distinctions drawn up by Yochai Benkler in his essay "The Political Economy of the Commons". The physical layer (networks and communication lines) is widely distributed between commercial, state, and academic interests, with no single player or set of players dominating, and the computers themselves are widely in the hands of the public and civil society. The logical layer, especially TCP/IP, and increasingly the various aspects of the read/write Web and the filesharing protocols, is still systematically geared towards participation. The content layer is on the one hand subject to an increasingly harsh intellectual property regime; on the other hand, commercial players are themselves subject to the logic of the economy of attention and the Wisdom Game, which dictates policies of information sharing and giving in order to get that attention. Next to the commercial portals, which may or may not play a nefarious role, the public is widely enabled to create its own content, and has been doing so by the millions. While part of the previously existing Information Commons or public domain is disappearing, other parts are being continuously constructed through the myriad combined efforts of civil society users.

This process is in full swing and is what we attempt to describe in this section. Below, I reproduce an adapted version of a diagram from Hans Magnus Enzensberger, which outlines the difference between 'repressive' and 'emancipatory' media. Without any doubt, the emerging alternative media infrastructure overwhelmingly has the characteristics of an 'emancipatory' medium:

  1. it is based on distributed programming (not centrally controlled programming);
  2. each receiver is a potential transmitter (not just a few broadcasters);
  3. it has mobilization potential (it doesn't generate passivity);
  4. it is characterized by interaction and self-production;
  5. it enables a political learning process; it allows collective production by equipotent participants;
  6. the social control is effected through self-organisation.

Just compare this list to the characteristics of corporate television! Thus, the historical importance of these developments seems overwhelmingly clear. This does not mean that the alternative internet media infrastructure automatically leads to emancipation, but that it can certainly enable political processes in that direction.

Let us now summarise these developments in technical terms. In terms of media, the broadband internet is rapidly mutating to enhance the capacity for distributed online publishing in the form of the Writeable Web[10] (also called the read/write web) and blogging[11] in particular. The distribution of audio programming is possible through internet radio and various audioblogging developments such as podcasting[12] (audio content, music or video distribution through iPods or MP3 players), and other types of 'time-shifted radio' such as mobcasting[13] ('casting' to mobile phones), and even Skypecasting (using the popular Voice over IP telephony software Skype,[14] but for broadcasting purposes, especially internet radio programs). Audiovisual distribution, which we can call public webcasting[15] as it incorporates both audio and video, is possible through the emerging practice of video blogging (vlogging),[16] but mostly through broadband P2P filesharing systems such as BitTorrent[17] and Exeem,[18] now already responsible for the majority of internet traffic.[19] While Exeem is still in development at the time of writing this paragraph (June 2005), BitTorrent is considered a major innovation making easy broadband-based audiovisual distribution all but inevitable. A wide variety of associated services is being developed by small companies and cooperative groups to assist citizens in their own production of audiovisual material.[20] What services such as Common Bits and the Broadcast Machine do is transform BitTorrent technology into an internet broadcasting platform that can be used by common users without expert technical knowledge.

All these developments taken together mean that the creation of an alternative information and communication infrastructure, outside of the control and ownership of the state and the corporate-based one-to-many broadcasting systems, is well under way. These developments are not only the product of a conscious activist strategy such as the one proposed by Mark Pesce and practiced by players such as Indymedia; they are also, to a large degree, the natural outgrowth of the empowerment of users, who, whenever they buy a WiFi hub, install Skype for personal usage, or perform any other natural act of improving their own connectivity, are building this alternative infrastructure from the edges onward, step by step, and this is also why it seems quite unstoppable.[21] In a sense, this is another example of 'production without a manufacturer', or 'the supply side supplying itself', explained in 3.1.A (and notes).

These technological developments form the basis for a new practice of citizen-produced 'journalism'[22] or 'reporting' by a 'self-informing public',[23] centered around the phenomenon of blogs and augmented by the other techniques we have been discussing.[24] See the example of the Korean OhMyNews,[25] working with 35,000 citizen reporters and 40 staff members, as an example of a new type of hybrid journalism. These developments are a new vehicle for the production of 'public opinion', for the creation, expression, distribution and sharing of knowledge, and they are both supplementing and competing with the traditional mass media vehicles that used to mold public opinion.[26] They represent an important opportunity to distribute views that fall outside the purview of 'manufactured consent'. Clay Shirky has called this a 'process of mass amateurisation',[27] an analysis related to my own concept of 'de-institutionalisation', a key aspect of peer to peer processes which I discuss in 3.3.C.

All this outpouring of expression, news and commentary is interlinked in a blogosphere, which has developed its own techniques to distill what is important from what is less important. Like the broadcast model, the blogosphere still has hubs and connectors drawing large crowds; unlike it, it creates the possibility of a "long tail". This means that whereas in the broadcast world the distribution curve bottoms out at the end, with no resources left for minority interests, in P2P media this bottoming out does not occur (the curve flattens before reaching the bottom), because the possibility exists of creating thousands upon thousands of micro-communities, organized by affinity. David Weinberger, focusing on the role of the blog for the individual, says it is an expression of 'the self in conversation',[28] available as a permanent record (through the innovation of permalinks, which create a fixed and permanent URL for every entry, unlike webpages, which were always subject to change and disappearance). A crucial innovation for the spread of blogs has been the development of RSS feeds,[29] i.e. Really Simple Syndication, which allow internet users to 'subscribe' to any blog they like, and to manage the totality of their feeds through their email, RSS reader software, or online sites like Bloglines. Related to the emergence and growth of the blogosphere is the growth in self-publishing, no longer the domain of rejected authors, but becoming a first choice for many who desire to reach a public directly, without the traditional publisher intermediaries.[30]
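The RSS mechanism behind such subscriptions is simple enough to sketch: a feed is just an XML document listing entries with titles and permalinks, which reader software fetches and parses. The feed content below is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal hand-written RSS 2.0 document standing in for a real blog feed.
FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><link>http://example.org/2005/06/first</link></item>
  <item><title>Second post</title><link>http://example.org/2005/06/second</link></item>
</channel></rss>"""

def read_feed(xml_text):
    """Return (channel title, [(entry title, permalink), ...]) from RSS 2.0 text."""
    channel = ET.fromstring(xml_text).find('channel')
    items = [(item.findtext('title'), item.findtext('link'))
             for item in channel.findall('item')]
    return channel.findtext('title'), items

title, entries = read_feed(FEED)
print(title)          # Example Blog
print(entries[0][0])  # First post
```

Because each entry carries its own permanent link, an aggregator can track thousands of blogs without any central directory: the subscriber's software simply polls each feed it has chosen.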

Therefore, in physical terms, for the evolving telecommunications infrastructure, the broadcast model is being replaced by the ‘meshwork system’, which is already used by the Wireless Commons movement[31] to create a worldwide wireless communications network that aims to bypass the telco infrastructure.[32] Several local governments aim to aid such a process.[33] For Yochai Benkler, the development of an 'open physical layer' based on open wireless networks, the so-called Spectrum Commons, is a key precondition for the existence of a 'Core Common Infrastructure'.

In such a system a wide array of local networks is created at very low cost, while they are interlinked with ‘bridges’. The technical breakthrough making this possible is the invention of viral communicators,[34] or meshworks of cooperating devices that do not need an infrastructure or a backbone, but themselves create the network through their excess capacity. Communication on these networks follows a P2P model, just like the internet. Mark Pesce has already developed a realistic proposal to build an integrated alternative network within ten years,[35] based on similar premises, with the additional concept of developing an 'Open Source TV Tuner',[36] which he predicts will completely overturn traditional broadcasting. (The same technology could also be used for phone calls, once hybrid WiFi phones are available.[37]) He has developed serious arguments about why 'netcasting' is not only economically feasible, but superior to the broadcasting model.[38] There are also already commercial versions of ‘file-serving television’ models, such as the one pioneered by TiVo,[39] as well as various plans involving TV over Internet Protocol.[40] "Radio Your Way" is a similar, though less popular, application for radio,[41] and there is a similarly broad array of internet radio developments.[42] Telephony using the Internet Protocol,[43] recently popularized by Skype, is similarly destined to overcome the limitations of the hitherto centralized telephone system. P2P is generally seen as the coming format of the telecommunication infrastructure, even by the industry itself, as confirmed by my own former experience as a strategic planner in that industry. British Telecom has declared that by 2008, the entirety of its network will have been converted to TCP/IP protocols.
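The logic of such viral communicators, devices that constitute the network by relaying for one another, can be sketched as a toy model (hypothetical device positions and radio range): a message crosses the mesh so long as each hop lies within range of the next device, with no backbone anywhere.

```python
from collections import deque

def in_range(a, b, radius):
    """Two devices can talk directly if they sit within radio range."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= radius ** 2

def mesh_hops(positions, radius, src, dst):
    """Hops needed to relay a message peer-to-peer across the mesh
    (None if no chain of devices bridges the gap)."""
    hops = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return hops[node]
        for other, pos in positions.items():
            if other not in hops and in_range(positions[node], pos, radius):
                hops[other] = hops[node] + 1
                queue.append(other)
    return None

# Four devices in a line, each only able to reach its nearest neighbour:
devices = {'a': (0, 0), 'b': (1, 0), 'c': (2, 0), 'd': (3, 0)}
print(mesh_hops(devices, 1, 'a', 'd'))  # 3: the message is relayed twice
```

Note the characteristic property: adding more devices extends coverage rather than congesting a central tower, which is why meshes grow 'from the edges onward'.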

While mobile telephony is strongly centralized and controlled, it will have to compete with wireless broadband networks, and users are busily turning it into yet another participative medium, as described by Howard Rheingold in Smart Mobs.

In the above phenomenology of P2P, notice that I have taken an extremely literal definition of P2P, as many hybrid forms exist; the important and deciding factor is: does it enable the participation of equipotent members? One of the key questions is: how inclusionary is the social practice, or technology, or theory, or any other manifestation of the P2P ethos?

These developments almost certainly mean that a new format of distribution and consumption is arising. At stake is the eventual unsustainability of the current TV broadcast model, in which the TV stations sell their audiences to advertisers because they control the audience and the distribution of the programs. In the new form of distribution, in which users themselves take control of the choice and timing of the programs, because of easy replication throughout the internet, both disintermediation and re-intermediation occur. The "hyperdistribution" of audiovisual material (think of the millions already downloading movies and TV programs) creates a direct link between producers and consumers. However, the economy of attention suggests processes of re-intermediation. But as we have seen in the blogosphere for printed content, this process can be undertaken by clever algorithms, protocols and reputation-based systems, coupled with processes of viral diffusion of recommendations in affinity groups, and does not necessarily mean commercial portals or intermediaries. In an upcoming book, Mark Pesce has coined the concept of 'hyperpeople' to describe the new generation of techno-savvy youngsters who are already living this new reality; as the technology becomes ever easier to use, it will spread throughout the population. And of course, it is not just a new form of consumption: there are also changes on the producer side, with audiences themselves becoming the producers of audiovisual material, as we can see in the growth of podcasting programs. Two consequences flow from this. First, the generalization of the phenomenon of the "Long Tail", whereby minority audiences are no longer constrained by the 'lowest common denominator' mass media and mass marketing logic, and we can expect a flowering of creativity and self-expression.
Second, the possibility of new majorities of taste and opinion forming, outside of the constraints of the mass production of unified corporate taste. As we expect from the playing out of P2P processes, we see both a strengthening of personal autonomy and a new type of collectivity. For some time now, we have seen democracies bypass majority opinions, and the development of hypermanipulation. The hope is that techno-social developments are creating the possibility of a new balance of power, a 'second superpower' of global public opinion that is more democratic in character.

To judge the progress or regress of these efforts, we should look at developments in the physical layer of the internet (who owns and controls it: at present a wide variety of players, with a key role for the public and civil society, who own the computers which are in fact the intelligent core of the internet); the logical layer of protocols, which pits closed systems against open systems in a continuous conflict; and the content layer, which pits the free creation of an Information Commons against permanent attempts to strengthen restrictive intellectual property rights. According to Yochai Benkler, what we need is a Core Commons Infrastructure, which would consist of:

  • an open physical layer in the form of open wireless networks, a 'spectrum commons'
  • an open logical layer, i.e. systematic preference for open protocols and open platforms
  • an open content layer, which means rolling back overly restrictive IP laws geared to defend business monopolies and stifle the development of a free culture

Let's conclude by assessing the current 'techno-social' state of progress of such an alternative infrastructure:

  • BitTorrent, Exeem, and other software programs enable broadband peercasting
  • Viral diffusion exists to circulate information about programming

What needs to be built is:

  • a meshwork of netcasting transmitters, as proposed by Mark Pesce
  • user-friendly desktop software, to manage content (Pesce's Open Tuner proposal)
  • better social mechanisms to select quality into such an alternative framework

Figure – Repressive Media vs. Emancipatory Media

  Repressive Media                           Emancipatory Media
  Centrally controlled programming           Distributed programming
  One transmitter, many receivers            Each receiver potentially a transmitter
  Immobilisation of isolated individuals     Mobilisation potential
  Passive consumers                          Interaction and self-production
  Depoliticisation                           Political learning process
  Production by specialists                  Collective production
  Control by property owners or the state    Social control through self-organisation

Source: Hans Magnus Enzensberger. Video Culture. Peregrine Smith, 1986, pp. 110-111

2.1.D. P2P as a global platform for autonomous cooperation

We have described peer to peer as the technological infrastructure of cognitive capitalism, and as an alternative information and communications infrastructure. But it is also emerging as much more than that: as a whole set of enabling technologies that allow global affinity groups to work and create value together on an autonomous basis.

Let's quickly review what we have already seen, but in this new context.

As a technological infrastructure, we have seen how grid computing can function as a way to combine untapped resources that lie dormant throughout the network. Since humans interact far more slowly than their machines can compute, no single user uses his resources to the full, and this capacity can now be combined in common projects. Using this methodology, any community can now mobilize vast 'supercomputing'-like resources. Filesharing is an example of the same ability, which can be extended to any meshwork of devices that can be connected. Resources that can be shared are computer processing power, memory storage, any content located on any participating computer, and collective monitoring through all kinds of interconnected sensors. The key feature of these systems is that any participant automatically becomes a provider. Any user of the Skype telephony network also offers his PC as a resource for the network, as does any filesharer or user of BitTorrent. This obligatory participation can be generalized because it comes at no extra cost to the owner of the technological resource.

As an information and communications infrastructure, it enables any group to communicate, create online knowledge collectives, and become a publisher. The open source infrastructure consisting of the Linux operating system, the Apache web server, the MySQL database system and the PHP publishing system (the four grouped together under the concept of the LAMP[44] infrastructure), combined with BitTorrent, allows for full-scale broadband multimedia webcasting. In addition, self-publishing, i.e. the publishing of fully-fledged print or e-books through print-on-demand systems that do not require the intermediation of a formal publisher, is rapidly becoming an accepted means of distributing books; it is practiced even by established authors when they want to reach specialized audiences that are not of interest to a traditional publisher.

Social mobile computing enables dispersed groups to act in a coherent fashion; it is a powerful agent of mobilization. Such networks, mobile or not, are also known as 'group-forming networks' since they enable the formation of subgroups. All kinds of social software[45] have been developed to enable the emergence and management of webs of cooperation that go beyond information sharing. Among these are the various forms of social networking software based on the theory of 'six degrees of separation', which holds that anybody in the world is connected to anybody else through no more than six steps. Friend-of-a-friend software is a fast-growing segment. This type of software is often coupled with 'presencing' software, which lets you know who else is visiting your webpage and which of your friends are available for instant messaging, and with mobile proximity alert services,[46] which tell you when one of your associates is close by, using 'geo-location' services such as GPS (Global Positioning System).
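The 'six degrees' claim is a statement about path lengths in a social graph, and is easy to make concrete: a breadth-first search over a friendship graph counts the hops separating two people. The graph and names below are hypothetical.

```python
from collections import deque

# A hypothetical friendship graph, as adjacency lists.
FRIENDS = {
    "ann": ["bob", "eve"],
    "bob": ["ann", "cem"],
    "cem": ["bob", "dia"],
    "dia": ["cem"],
    "eve": ["ann", "dia"],
}

def degrees_of_separation(graph, start, target):
    """Breadth-first search: the number of friend-of-a-friend hops
    between two people, or None if they are not connected at all."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for friend in graph.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None

print(degrees_of_separation(FRIENDS, "ann", "dia"))  # 2 (ann -> eve -> dia)
```

Friend-of-a-friend services perform essentially this search, only over graphs of millions of members rather than five.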

A crucial ingredient are the social accounting tools, which allow anyone to judge the degree of participation and trustworthiness of other members of the network through communal validation processes. Similar in intent are formal rating systems, such as the one Amazon uses to rate books, which are often used to gauge reputation and trustworthiness (eBay's feedback scores, Slashdot's karma system).
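A minimal sketch of such social accounting, loosely in the spirit of eBay-style feedback (the actual formulas used by these sites are more elaborate and are not reproduced here): each transaction leaves a positive, neutral or negative rating, and the system aggregates them into a net score and a positive-feedback ratio that other peers can consult.

```python
def reputation(feedback):
    """Aggregate peer ratings into a reputation.
    `feedback` is a list of +1 (positive), 0 (neutral), -1 (negative)."""
    positives = feedback.count(1)
    negatives = feedback.count(-1)
    score = positives - negatives          # net score shown next to a member's name
    rated = positives + negatives
    ratio = positives / rated if rated else None  # share of positive feedback
    return score, ratio

score, ratio = reputation([1, 1, 1, 0, -1])
print(score, ratio)  # 2 0.75
```

The point is not the arithmetic but the communal validation: the numbers are produced by the peers themselves, as a by-product of ordinary participation.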

Automatic referral or recommendation systems match like-minded users by exposing them to each other's tastes, a system also used by Amazon. Google's success in presenting the most appropriate results is to a large degree the result of its decision to rank any resource according to the 'collective wisdom' of web users, i.e. by calculating the pointers from other webpages. The latter are called 'implicit' referral systems, since they do not require any conscious decision by users. Sites are learning to use the collective judgment of their participants through opinion sites (Epinions), through social bookmarking sites, and through collective online publishing systems such as Slashdot and Kuro5hin, which use self-evaluation ratings.
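The 'implicit' referral idea behind Google's ranking can be illustrated with a toy version of the PageRank iteration: a page's rank is the damped sum of the ranks of the pages pointing at it, so a link acts as an unconscious vote. This is a simplified sketch with hypothetical pages, not Google's actual algorithm.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank over a dict mapping each page to the pages it
    links to: rank flows along links, damped toward a uniform baseline."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

# Three hypothetical pages: 'a' and 'c' both point to 'b'.
ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["b"]})
print(max(ranks, key=ranks.get))  # 'b' -- the most pointed-to page ranks highest
```

No page owner explicitly rates anything; the ranking is extracted from decisions web authors made for their own reasons, which is exactly what makes the referral 'implicit'.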

A number of companies, such as Groove and Shinkuro, aim to develop fully fledged cooperation environments.[47]

The point of all the above is to show how software is being created whose aim is to enhance various forms of collaboration. We are only at the beginning of a process whereby participation becomes embedded in most new software, a move away from the individual bias originally at the basis of personal computing.

Howard Rheingold and others, in an excellent overview of Technologies of Cooperation, have outlined seven dimensions of such cooperative ventures.

  1. The structures are dynamic and evolving, not static.
  2. The rules are not imposed by any outside authority, but emerge from the group itself.
  3. The resources are made available to the public, not kept private or available only through sale.
  4. Thresholds are kept as low as possible, so that anyone can participate.
  5. Feedback becomes systemic, through the use of social accounting software and other forms of 'participation capture'.
  6. Memory becomes persistent, no longer ephemeral as it was in the first phase of ever-changing URLs.
  7. Identity is derived from the group and participation in the group.

Howard Rheingold has also distilled seven recommendations to anyone thinking of launching technology-enabled cooperative ventures:

  1. shift from designing systems to providing platforms: the system must allow emergent structures decided upon by the participants
  2. engage the community in designing the rules: the protocol must be democratically arrived at
  3. learn how to tap invisible resources
  4. track thresholds and phase changes: this is important as online communities evolve through various phases that have different rules and success factors
  5. foster diverse feedback loops
  6. convert present knowledge into deep memory: through archiving, persistent addressing and version control, contributions are never discarded but remain available
  7. support participatory identities, by keeping track of contributions so that the record acts as recognition of the participants.

It is important to envisage the availability of such an ecology of cooperative tools as enabling autonomous cooperation and peer production, and not just as an auxiliary to the corporate world. In our overview of the emergence of P2P in the economic sphere we will see that this is not a pious wish.

2.2. Explaining the Emergence of P2P technology

Why this emergence? The short answer is: P2P is a consequence of abundance (in fact it is both cause and consequence). With the advent of the ‘Information Age’, which started with mass media and unintegrated private networks for multinationals, but especially with the advent of the internet and the web, which allow for digital copying and distribution of any digital creation at marginal cost, information abundance is created. For business processes, the keyword becomes ‘flow’, and the integration of these endless flows. Production of material goods is predicated on the management of immaterial flows. In such a context, centralized systems almost inevitably create bottlenecks that hold up the flow. In a P2P system, any node can contact any other node without passing through such bottlenecks. Hierarchy only works with scarcity, in situations where the control of scarce resources determines the outcome of zero-sum power games. In a situation of abundance, centralized nodes cannot possibly cope.[48] From the engineering standpoint, therefore, P2P is an appropriate solution for distributing workloads among a large number of nodes, a solution which is effective in many cases, though not always. Information (I probably do not need to remind the reader of this) differs from material goods in that its sharing does not diminish its value but, on the contrary, augments it. Conclusion: P2P is 'deblocking'.

Second, P2P systems are predicated on redundancy: several resources are always available to conduct any process. This makes them far less vulnerable than centralized systems to any kind of disruption: P2P systems are extraordinarily robust. No centralized system can compare, in terms of resources, to the extraordinary combination of millions of peripheral systems, with their billions of unused memory bytes and computing cycles. These are only unlocked in a P2P system.
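The robustness claim can be made quantitative. If each node hosting a copy of a resource fails independently with probability p, the chance that all k replicas are unreachable at once is p^k, so even quite unreliable peers yield high availability once a few copies exist. A minimal sketch:

```python
def availability(node_failure_prob, replicas):
    """Chance that at least one copy of a resource is reachable, assuming
    each hosting node fails independently of the others."""
    return 1 - node_failure_prob ** replicas

# A single server that is down 10% of the time, vs. the same file held
# by 5 peers that are each offline half the time.
print(availability(0.10, 1))  # 0.9
print(availability(0.50, 5))  # 0.96875
```

Five flaky home PCs thus already outperform one fairly reliable server, which is the statistical core of the redundancy argument.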

Abundance is again both a cause and a consequence of complexity. In a situation of multiplying flows, flows that no longer follow predetermined routes, it cannot possibly be predicted where the ‘solution’ to any problem lies. Expertise comes out of a precise combination of experience, which is unpredictable in advance. Thus, systems are needed that allow expertise to unexpectedly announce itself when it learns that it is needed. This is precisely what P2P systems allow to an unprecedented degree. Conclusion: P2P is 'enabling'.

There is also a 'democratic rationale' to the above enabling of resources. Since it is a bottom-up rather than a top-down process, P2P is 'empowering'. It reflects the cultural evolution towards an ethos of sharing abundant resources that gain value through their distribution.

2.3.A. Placing P2P in the context of the evolution of technology

Premodern technology was participative, but not yet differentiated and autonomous. The instruments of artisans were extensions of their bodies, with which they ‘cooperated’. The social lifeworld was not yet differentiated into separate spheres or into subject/object distinctions: people saw themselves not so much as separate and autonomous individuals, but much more as parts of a whole, following the dictates of that whole (holism), moving in a world dominated by spirits: the spirits of men (the ancestors), of the natural world, and of the objects they used (Dumont, 1981).

Modern technology could be said to be differentiated (division of labour, differentiation of social fields, relative autonomy of technological evolution), but is no longer participative. The subject-object dichotomy means that nature becomes a resource to be used (objects used by subjects). But the object, the technological instrument, also becomes autonomous, and in the factory system typical of modernity, a dramatic reversal takes place: it is the human who becomes a ‘dumb’ extension of the machine. The intelligence is not so much located in the machine, but in the organization of the production, of which both humans and machines are mere cogs. Modern machines are not by themselves intelligent, and are organized in hierarchical frameworks. Modern humans think of themselves as autonomous agents using objects, but become themselves objects of the systems of their own creation. This is the drama of modernity, the key to its alienation.

In post-modernity, machines become intelligent (though not in the same way as humans: they can only use the intelligence put into them by humans, and so far lack creative innovation, problem-solving and decision-making capabilities). While the old paradigm of humans as objects in a system certainly persists, a new paradigm is being born. The intelligent machines become computers, extensions now of the human brain and nervous system (instead of extensions of the external limbs and internal functions of the body, as in the industrial system). Humans again start cooperating with computers, seen as extensions of their selves, their memories, their logical processes; but also – and this is crucial – computers enable affective communication among a much wider global community of humans. Of course, within the context of cognitive capitalism (defined as the third phase of capitalism, where immaterial processes are more important than material production, and where information ‘as property’ becomes the key asset), all this still operates in a wider context of exploitation and domination; but the potential is there for a new model which allies both differentiation (the autonomous individual retains his freedom and prerogatives) and participation. Within the information paradigm, the worlds of matter (nanotech), life (biotech) and mind (AI) are reduced to their informational basis, which can be manipulated, and this opens up nightmarish possibilities for the extension of the resource-manipulation paradigm, now involving our very own bodies and psyches. However, because of the equally important paradigm of participation, the possibility arises of a totally new, subjective-objective, cooperative way of looking at this, and this is an element of hope.

According to the reworking of Foucault's insights by Deleuze and Guattari, there is a clear connection between the type of society and the type of technology that is dominant. Simple mechanical machines were dominant in the classical period of modernity, the period of sovereignty (18th century); thermodynamic systems became dominant in the 19th century, inaugurating disciplinary societies; finally, Deleuze dates the advent of control societies to the arrival of cybernetic machines and computers. Our sections on the evolution of power will detail this aspect of the evolution of technology.

2.3.B. P2P and Technological Determinism

Starting our description with the emergence of P2P within the field of technology could be misconstrued as saying that P2P is a result of technology, in a ‘technologically deterministic’ fashion.

The precise role of technology in human evolution is subject to debate. A first group of positions sees technology as ‘neutral’: humans want more control over their environment, want to go beyond necessity, and in that quest build better and better tools. But how we use these tools is up to us. Many inventors of technology and discoverers of scientific truths have argued this way, saying for example that atomic energy can be used for good (energy) or for bad (war), but that this is entirely a political decision.

A different set of positions argues that, on the contrary, technological development has a logic of its own, that as a system it goes beyond the intentions of any participating individual and in fact becomes their master. In such a reading, technological evolution is inevitable and has unforeseen consequences. In the pessimistic vision, it is in fact the ultimate form of alienation. This is so because technology is an expression of just a part of our humanity, instrumental reason; when embedded in technological systems and their machines, it forces us to resemble it, and we indeed follow the logic of machines, losing many parts of our full humanity. Think of the positions of Heidegger, Baudrillard, and Virilio as exemplars of this type of analysis. Like-minded analyses would point out that though strict Taylorism has disappeared from immaterial production, the factory model has in fact spread throughout society, forming a kind of ‘Social Taylorism’. Efficiency and productivity thinking has taken over the sphere of intimacy. There has been a dramatic destruction of social knowledge and skill, of autonomous cultures; this type of knowledge has been ‘appropriated’ by the system of capital and re-sold to us as commodities. Think of paid-for online dating, as a symptom of the loss of skill in dating, to give one example.

Technological determinism can also have an optimistic reading. In this view, represented for example by the progress ideology of the late 19th century, and currently by technological transhumanists such as Kurzweil (Kurzweil, 2000), technology represents an increasing mastery and control over nature, a means of going beyond the limitations set for us by nature, and, for this type of interpretation, that is an entirely good thing.

The position I personally feel the closest to is the ‘critical philosophy of technology’[49] developed by Andrew Feenberg (Feenberg, 1991, 1999). In his analysis, technological artifacts are a social construction, reflecting the various social interests: those of capital, those of the engineering community conceiving it, but also, those of the critical voices within that community, and of the ‘consumers’ subverting the original aims of technology for entirely unforeseen usages. Feenberg comes very close to recognizing the new form of power that we discuss in chapter 4: i.e. the protocollary power (Galloway, 2004) which concerns the ‘code’. The very form of the code, whether it is for the hardware or the software, reflects what usages can be made of technology.

It is in this sense that I see a first important relationship between the emergence of P2P and its technological manifestations. The engineers who conceived the point-to-point internet already had a wholly new set of conceptions, which they integrated into their design: it was in fact explicitly designed to enable peer-based scientific collaboration. Thus, the emergence of peer to peer as a phenomenon spanning the whole social field is not ‘caused’ by technology; it is rather the opposite: the technology reflects a new way of being and feeling, which we will discuss in section 6A in particular. This position is a version of that put forward by Cornelius Castoriadis in his "L'Institution Imaginaire de la Societe". Society is not just a physical or rational-functional arrangement; everything is experienced symbolically and reflects a meaning that cannot be reduced to the real or the rational. It is the product of a 'radical social imaginary'. And this imaginary, though rooted in the past (through the symbolic meaning of institutions), is nevertheless a constitutive creation of mankind. Technology is just such a creation, a dimension of instituted society that cannot be divorced from the other elements.[50] In this context, peer to peer is the product of a newly arising radical social imaginary. Nevertheless, this does not mean that technology is not an important factor.

Why is that? In a certain sense, peer to peer, understood as a form of participation in the commons, i.e. as communal shareholding, which we discuss in section 3.4.C, has ‘always existed’ as a particular relational dynamic. It was especially strong in the more egalitarian tribal era, with its very limited division of labour, before the advent of property and class division. But it was always limited to small bands. After the tribal era, as we enter the long era of class-based civilization, forms of communal shareholding and egalitarian participation survived, but always subservient, first to the authority structures of feudalism and similar ‘land-based systems’, then to the ‘market pricing’ system of capitalism. But the situation is now different, because the development of P2P technology is an extraordinary vector for its generalization as a social practice, beyond the limitations of time and space, i.e. of geographically bounded small bands. What we now have for the first time is a densely interconnected network of affinity-based P2P networks. Thus, the technological format that is now becoming dominant is an essential part of a new feedback loop, which strengthens the emergence of P2P to a degree not seen since the demise of tribal civilization. It is in this particular way that the current forms of P2P are a historical novelty, and not simply a repeat of the tolerated forms of egalitarian participation in essentially hierarchical and authoritarian social orders.

To repeat: it is not the technology that causes P2P. Rather, as technology, it is itself an expression of a deep shift in epistemology and ontology occurring in our culture. Nevertheless, this technology, once created, becomes an extraordinary amplifier of the existing shift: it allows an originally minoritarian cultural shift to eventually affect larger and larger numbers of people. Finally, that shift in our culture is itself a function of the emergence of a field of abundance, the informational field, which is in turn strongly related to the technological base that helped create it.

To explain this argument, let us formulate the question of ‘why now?’ in a slightly different manner. Technology philosophers such as Marshall McLuhan (McLuhan, 1994) have pointed out that technology is an ‘extension of our bodies’, or more precisely of the faculties of our bodies and minds. In a simplified way: tribal-era technologies, such as spears and arrows, reflect the extremities of our limbs, the nails and fingers. Agricultural-era technologies reflect the extension of our muscular system and the limbs proper: arms and legs. Industrial-era technologies reflect our central body and its internal metabolic functions: the transformation of raw materials into more refined products that can be used by the body. Industrial economies are about producing, distributing and consuming physical products. The information economy, by contrast, is characterized by the externalization of our nervous system (telephone and telegraph) and our minds (computers), with a logic of first one-to-one communication technologies, then one-to-many (mass media), and finally, with the internet and computer networks, many-to-many.

If we look at history in such a broad way, we can see P2P principles operating in the small bands of the tribal era. But as soon as society complexified itself through an ever more elaborate division of labour, the complexity of organising society seemed to demand centralized institutions. According to systems theorists, ‘fixed arrangements dramatically reduce transaction costs’. In a Darwinian sense, one could say that they could better manage information scarcity, so that a smaller number of players could rationalize the organisation of such complexity through hierarchical formal rules. After the revolution of print, followed by the invention of electronic communication and a dramatic lessening of information scarcity, we see a further integration of a more differentiated world system and the emergence of a market, though within that market it still made more sense to have larger and larger monopolistic players. With the advent of worldwide communication networks (before the internet, these were a monopoly of large companies), we see major changes in organizational logic: a flattening of hierarchies. According to systems theorists, complex systems cannot themselves control their increasing number of ever-more efficient subunits, except by granting them ever greater functional autonomy. The larger system controls whether a subunit has carried out a task, but no longer how it is carried out. Hence Arvid Aulin's law of ‘requisite hierarchy’,[51] which states that the need for hierarchy diminishes insofar as the subunits increase their own capacity for control, and that where such internal controls or external regulation are absent, hierarchy is needed.
Thus one of the keys to understanding current processes is that communication technologies have enabled this kind of control and regulation to such a degree, as shown in P2P processes, that centralized command and control can in fact be overcome to a very great extent. Or, more correctly, the subunits become primary, down to the level of individual participants, who can now voluntarily defer to the subunit for minimal control of ‘what is produced’ (and no longer ‘how it is produced’), while the subunits do the same vis-à-vis the overall system. Within corporations, P2P processes can only partially thrive, because corporations have to protect the profit motive; outside the corporation, this limit can be overcome, and those processes of ‘production going outside the boundaries of the corporation’ increasingly show that the profit imperative, and the private appropriation of social-cooperative processes, is becoming counterproductive. In much simpler terms: the development of information-processing capabilities has liberated cooperation from the constraints of time and space. Thus, while accepting the argument that P2P processes have always existed, confined to small bands (or emerging for very short periods in revolutionary situations, only to be defeated by their then still more efficient authoritarian and centralized enemies), it is indeed ‘only now’ that such a massive emergence of P2P is possible. We must thus conclude that technology IS a very important factor in this generalized emergence.

More Information


  1. I.e. the 'current economic system'. See section 3.1.B for a discussion of the concept of 'cognitive capitalism'. In general, we use this term for the current form of 'informational capitalism', i.e. a form of capitalism in which immaterial processes are of more importance than material ones.

  2. Salvino Salvaggio, personal communication on hierarchy in FLOSS projects:

    "First and foremost, it is not entirely correct to maintain that in P2P initiatives the various participants are 'equipotent'. One need only reread, for example, the archives and non-technical documentation of most projects to see that certain people play a coordinating role and define the terms of collaboration of the other contributors. In the same way, some people in P2P initiatives have a global vision of the project, while others are only charged with realizing small functional pieces. The main difference with respect to traditional capitalism is that in P2P, the segmentation of the levels of 'power' of the various participants is freely consented to, accepted as a configuration of relationships aimed at optimizing functional efficiency. As such, any configuration of relationships between project participants can be openly put up for discussion at any moment. It is not a matter of an imposed normative logic against which the only avenue open to divergent views would be appeal. On the contrary, challenge through peer discussion is inscribed at the very heart of the processes of self-organisation. From this first aspect it follows that it is excessive to say that in P2P projects there is no hierarchy. It does indeed exist, but is respected most of the time because it is freely consented to and discussed. As proof: the Linux project has been coordinated by a sort of steering body that integrates changes and takes care to maintain the coherence of the project, preventing contributors from doing just anything. One could say that in both cases it is a matter of power or hierarchy without coercion, since those who disagree are not 'punished'; they can easily move about: entering or leaving the project is a right that no one denies its members."

    The practicalities of equipotential selection are explored at http://www.vecam.org/article.php3?id_article=346 . There the process is investigated among young chatters as they move from a closed environment to an open public environment, and are subject to two processes: 1) being taken in hand ('la prise en charge'); 2) being put to the test ('la mise a l'epreuve')

  3. See http://www.theyrule.net/ for examples.

  4. Such as the virtual gaming marketplaces studied by Dr. Castronova at Indiana University. See http://www.itconversations.com/shows/detail377.html

  5. Summary of Internet Governance bodies by ACM's Ubiquity magazine

    Certain protocols, and the parameters required for their usage, are essential in order to operate on the Internet. A number of bodies have become responsible for those protocol standards and parameters. It can be fairly said that those bodies steer the Internet in a significant sense. This document is a summary of those bodies and their most important characteristics.

    Almost all Internet technological standards are developed and set by the group consisting of the Internet Society (ISOC) and the units operating under the auspices of ISOC: the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), the Internet Engineering Task Force (IETF), the Internet Research Steering Group (IRSG), the Internet Research Task Force (IRTF), and the RFC Editor. It is important to note that, while these units are responsible to ISOC, ISOC allows them a large degree of independence in their technical work.

    Internet domain names and IP addresses are the province of the Internet Corporation for Assigned Names and Numbers (ICANN) and its Internet Assigned Numbers Authority (IANA).

    World Wide Web standards are developed by the World Wide Web Consortium (W3C).

    It should be noted that the direction of the Internet's physical network structure is not addressed in this document. That structure is essentially determined by a large number of mainly commercial network operators, ranging from small to intercontinental, that build and join their infrastructures in response to market forces, in order to provide them to subscribers on a paid basis. These networks that form the Internet are linked in a topology similar to that of a large, well-developed highway system.

  6. P2P Weblog monitors filesharing developments, including its political and economic aspects, at http://www.p2p-weblog.com/

  7. the struggle for free access to free culture through the sharing of files:

    "From the second generation on, you had distributed servers. You could run your own server and tie them into others. Searches took longer, were less accurate and there was no guarantee you would be searching a single other machine, much less the entire network. It was however, unstoppable. For every node you took the time and money to blast out of existence, there were several thousand others springing up. Clearly, the old tactics would not work. To make matters worse, these new networks were aware of the tactics being used against them, and actively tried to nullify them. As the network programmers were adding features, they were also adding security, both for them, and for their users. Things started out simple, like support for file types other than MP3, and quickly became more sophisticated. Military grade encryption? No problem. Licence restrictions that beat the pigopolists with the very sticks they created? Sure, pick any of five. Random user names, obfuscated IP addresses, changing ports and just about everything else you could think of has been done by now.

    The real stake in the heart of the RIAA and friends came with the complete removal of servers, in a true peer to peer sense. Instead of having many little servers, you had every node doing dual purpose client and server jobs. Searches were completely decentralised, and the RIAA was finished, period. The recent string of stinging court losses for the Greediest Monopoly on Earth in the US courts assured any chance the RIAA had was gone. Its worst nightmare was confirmed, as everyone else just knew, the services were completely legal. The Grokster decision affirmed the right of the companies to provide the services they always have, and to do so with impunity. People using it may be guilty of crimes, but the services themselves are not illegal. In the old days, there was one provider, and one repository, one throat to strangle. It was manageable technically if it came down to a technical solution. Instead of allowing that technical solution to blossom, they went the legal route, and lost. In the intervening years, the tech went around them, and they sat still, and possibly regressed.

    The problem with forced evolution is that it tends to work. The RIAA made the networks evolve technically, from a relatively innocuous MP3 network to the file sharing network from hell. There is nothing you can't get anymore, and there is no one to stop it. If they came up with a tool, unlikely as that may be, there is no place to implement it.”
    (http://www.theinquirer.net/?article=18206 )

    Some online music resources:

    An article explaining how to find legal online music, at : http://www.nytimes.com/2004/09/10/a...sic/10INTE.html ; Grouper is a software tool that lets you share music amongst friends only, to ensure the fair use principle, at http://www.grouper.com/ ; user-enriched evaluations of filesharing programs at http://www.slyck.com/programs.php

  8. the ascent of a third generation of peer-to-peer networking technology

    "Each successive generation has decentralized more functions, making the networks harder to shut down and helping to expand the power of searches. The first generation of file-swapping services, led by Napster, were built around big centralized indexes that would keep track of what was available everywhere on the network. These would serve as matchmakers, linking a person searching for a file with the computer where it was stored. That was efficient, allowing access to a huge range of material—but it also proved to be illegal. Courts said that Napster was responsible for a network where a vast amount of copyright infringement was happening and ultimately shut the company down. The second generation of decentralized services, led by Gnutella and the FastTrack technology underlying Kazaa, soon emerged to take its place. Neither of these had central servers. They relied instead on passing search requests from computer to computer until a file was found, and then passed that information back to the original searcher. That technology proved initially unwieldy, as millions of search requests passed through every computer on the network, creating traffic jams at low-bandwidth bottleneck points. That improved over time as programmers figured out ways to hand off these search requests more efficiently, but usually resulted in searches that included only part of a network—say 100,000 people instead of 2 million. A U.S. Appeals Court recently ruled that this kind of decentralized network was legal, unlike Napster, in part because the software distributors did not have direct control over what was happening on the networks. "The (record labels and movie studios) urge a re-examination of the law in the light of what they believe to be proper public policy," the court wrote in that decision. "Doubtless, taking that step would satisfy the copyright owners' immediate economic aims. 
However, it would also alter general copyright law in profound ways with unknown ultimate consequences outside the present context." The third generation of networks, represented by eDonkey and now Morpheus, as well as a host of smaller independent developers, makes the tools even more decentralized than before. Distributed hash tables are essentially a way of taking a snapshot of where every file on the network is at a given moment and scattering bits of that information around the entire network. To find a given file, a search request goes first to any computer on the network. That computer will point to a different computer that has a little more information on how to find the file. The third computer might have information on the file itself—or it might take a few more hops to find the computer with the right information. The process is analogous to asking a succession of increasingly informed tour guides for directions, rather than accosting random people on the street. The information about the network in each place is constantly being updated as new files or computers are added."
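    The hop-by-hop lookup described above can be sketched in a few lines. The following toy model is illustrative only: the node names, the 32-bit identifier space, and the XOR distance metric are assumptions, loosely in the style of Kademlia-type distributed hash tables rather than any particular network's protocol. It shows how each node, knowing only a few peers, forwards a query to whichever known peer is "closer" to the key, like the succession of increasingly informed tour guides.

```python
import hashlib

def node_id(name: str) -> int:
    # Hash names into one 32-bit identifier space, as DHTs hash both
    # node addresses and file keys so they can be compared directly.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << 32)

class Node:
    def __init__(self, name):
        self.id = node_id(name)
        self.peers = []   # partial view: each node knows only a few others
        self.store = {}   # key -> location info held at this node

def lookup(start: Node, key: int, max_hops: int = 10):
    """Hop to whichever known peer is XOR-closest to the key, until a
    node holding the key is reached or no peer is any closer."""
    current, hops = start, 0
    while hops < max_hops:
        if key in current.store:
            return current.store[key], hops
        closer = min(current.peers, key=lambda p: p.id ^ key, default=None)
        if closer is None or (closer.id ^ key) >= (current.id ^ key):
            return None, hops   # dead end: no peer is closer to the key
        current, hops = closer, hops + 1
    return None, hops
```

Because the index is scattered across all participants in this way, there is no single server whose removal stops searches, which is exactly the legal and architectural point the article makes.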

  9. Consciously working for a participatory culture: Interview of Nicholas Reville of Downhill Battle

    The following quote shows that developers of filesharing programs are aware of the social and political import of their work. See the previous quotes on how the whole development of filesharing is driven by a political and social struggle. It is not technology causing change (technological determinism); rather, the technology is itself shaped by the dynamics of struggle.

    Question by Greplaw editors: Is there anything about Bit Torrent that helps foster a participatory culture?

    Reply: “It can definitely be part of a big step forward. 'Participatory culture' is how we've started thinking about the intersection of all these phenomenons like blogs, filesharing networks, wikis, and just the web in general. They all make it easier for people to create and distribute art/ideas and also let people act as filters and editors. But we're really at the very, very beginning of all this. The shift that we're going to see from the current top-down culture model will be absolutely revolutionary. As overused as that term is, there's really no other word that captures the magnitude of what's going on here.

    As for BitTorrent specifically, searching for content on napster-style search and download clients really sucks and, on its own, creates a huge bias towards corporate content that people already know about. On the other hand, websites and blogs organize and present content so that you can discover things you didn't even know you were looking for. Since BitTorrent uses web-based links, it has the potential to fit very well with blogs and content management systems while making it possible for anyone to offer very large files without worrying about bandwidth.”

    DownHill Battle, at http://www.downhillbattle.org/ , is “a non-profit organization working to end the major label monopoly and build a better, fairer music industry”

    Grey Tuesday as an example of online music activism in action, at http://www.firstmonday.org/issues/issue9_10/howard/index.html

  10. Writeable Web tools are reviewed at http://www.oreillynet.com/pub/t/84

    A directory of open source content management systems at http://www.la-grange.net/cms

  11. Fortune magazine on the growing importance and effects of blogging for the business community, at http://www.fortune.com/fortune/technology/articles/0,15114,1011763,00.html

    Amongst the recommended do-it-yourself blogging programs are http://movabletype.org/ and https://www.typepad.com/

  12. Podcasting described, by the Washington Post

    "The word 'podcasting' is a mash-up, a contraction of broadcasting and iPod, the popular music player from Apple Computer. The big idea is to let people save Internet audio so they can listen whenever they want from a computer or handheld device. Receiving software lets people pick podcasts from online directories, clicking a button to tell their computers to find and download new versions of those selected programs. Files automatically get copied to iPods."

    Amongst the directories to find podcasting programs: http://audio.weblogs.com/ ;
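    The receiving-software loop the Post describes, checking each subscribed feed on a schedule and fetching anything new, reduces to a small amount of feed parsing. A minimal sketch using Python's standard library follows; the feed content and episode URLs are invented for illustration and do not refer to a real podcast.

```python
import xml.etree.ElementTree as ET

def new_enclosures(rss_xml: str, already_have: set) -> list:
    """Return audio-file URLs from an RSS feed that have not yet been
    downloaded: the core check a podcast receiver runs on a schedule."""
    root = ET.fromstring(rss_xml)
    urls = []
    for item in root.iter("item"):
        enclosure = item.find("enclosure")   # RSS <enclosure> carries the audio URL
        if enclosure is not None:
            url = enclosure.get("url")
            if url and url not in already_have:
                urls.append(url)
    return urls

# Example feed (hypothetical URLs):
FEED = """<rss><channel>
  <item><title>Ep 1</title><enclosure url="http://example.org/ep1.mp3" type="audio/mpeg"/></item>
  <item><title>Ep 2</title><enclosure url="http://example.org/ep2.mp3" type="audio/mpeg"/></item>
</channel></rss>"""
```

A real receiver would then download each returned URL and copy the file to the listener's portable player, as the quote describes.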

  13. Mobcasting

    MotorFM allows MP3 downloads and songs streamed directly to mobile phones, at http://www.wired.com/news/print/0,1294,66597,00.html

  14. Skype, using P2P filesharing principles for telephony:

    Zennström and Friis, the creators of KaZaa, one of the early and popular P2P filesharing systems, came up with the idea of using P2P to enable free phone calls on the internet, and Skype was born, poised for extraordinarily rapid uptake. Beyond phone calls, users have been creatively tinkering with it to enable audio and video broadcasts (i.e. Skypecasting). Excerpts from an interview in Business Week:

    "Q: Where else could this go, beyond files and people?

    A: It could be other resources – you know, storage, video streams. But this really works on two levels. First there's the peer network, and I've been stressing that because it's the enabler for everything. But then there are the applications. We could not have foreseen – wow! – all the things that could be developed on top of P2P. For instance, when we first used peer-to-peer technology, we didn't foresee that we could do voice. It became obvious to us after some time, but when we started we didn't know what the applications would be. But when we applied the technology to various industries, we realized we could create a sustainable competitive advantage. That's because, in the normal system you have a marginal cost for every unit you add. If your network is client/server-based, you have to add a new network card for each new Web server, central switch, and so on. But in a peer-to-peer network, you're reusing the system resources in the network, so the marginal cost of producing a phone call or a file transfer or something else is zero. "

    An article explaining the rapid diffusion of Skype, at http://www.nytimes.com/2004/09/05/business/yourmoney/05tech.html?th

  15. Following the suggestion of http://blog.commonbits.org/2005/06/be_the_media_th.html?, which offers an overview of webcasting developments (June 2005)

  16. On Vlogging: http://www.seriousmagic.com/products/vlogit/ ; http://www.vlog.com/

  17. How Bittorrent works

    "Let's say you want to download a copy of this week's episode of Desperate Housewives. Rather than downloading the actual digital file that contains the show, instead you would download a small file called a "torrent" onto your computer. When you open that file on your computer, BitTorrent searches for other users that have downloaded the same "torrent." BitTorrent's "file-swarming" software breaks the original digital file into fragments, then shares those fragments between all the users that have downloaded the "torrent." Then the software stitches together those fragments into a single file that a user can view on their PC. Sites like Slovenia-based Suprnova offer up thousands of different torrents without storing the shows themselves. Meanwhile, BitTorrent is rapidly emerging as the preferred means of distributing large amounts of legitimate content such as versions of the free computer operating system Linux."
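    The "file-swarming" mechanism described above, splitting a file into hash-verified fragments that can arrive from any peer in any order, can be sketched as follows. The piece size and data here are toy values for illustration; real torrents use pieces of 256 KB or more, with the SHA-1 hash of each piece recorded in the .torrent file.

```python
import hashlib

PIECE_SIZE = 4  # tiny for illustration; real torrents use 256 KB and up

def make_pieces(data: bytes):
    """Split a file into fixed-size pieces and record each piece's hash,
    as a .torrent does, so peers can verify fragments independently."""
    pieces = [data[i:i + PIECE_SIZE] for i in range(0, len(data), PIECE_SIZE)]
    hashes = [hashlib.sha1(p).hexdigest() for p in pieces]
    return pieces, hashes

def reassemble(received: dict, hashes: list) -> bytes:
    """Stitch pieces (which may arrive in any order, from any peer) back
    into one file, rejecting any piece whose hash does not match."""
    out = []
    for index, expected in enumerate(hashes):
        piece = received[index]
        if hashlib.sha1(piece).hexdigest() != expected:
            raise ValueError(f"piece {index} failed verification")
        out.append(piece)
    return b"".join(out)
```

The per-piece hashes are what let a swarm trade fragments with strangers safely: a corrupted or forged fragment is detected and re-requested rather than silently stitched into the file.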

    A profile of Bram Cohen, designer of Bittorrent, in Wired at http://www.wired.com/wired/archive/13.01/bittorrent.html

    Sources for Bittorrent downloads:

    Note that they may disappear due to legal action.

    • Legal Torrents, which includes a wide selection of electronic music. It also has the Wired magazine Creative Commons CD, which has songs from artists like the Beastie Boys who agreed to release some of their songs under a more permissive copyright that allows free distribution and remixing.
    • Torrentocracy has videos of the U.S. presidential debates and other political materials.
    • File Soup offers open-source software and freeware, music from artists whose labels don't belong to the Recording Industry Association of America trade group, and programs from public television stations like PBS or the BBC.
    • Etree is for devotees of "trade-friendly" bands like Phish and the Dead, who encourage fans to share live recordings, usually in the form of large files that have been minimally compressed to maintain sound quality. (http://www.wired.com/news/digiwood/0,1412,65625,00.html?)

    Blog Torrent, an improvement of BitTorrent specially designed for TV-like channels

    (later renamed the Broadcast Machine):
    "Blog Torrent adds features to BitTorrent that make it much easier for people to ‘publish’ files. We’ve made a simple, web-based way to create a ‘torrent’ and upload it in one step. We’ve also made it easier to install a ‘tracker’ which is necessary on the server side to connect everyone who’s sharing the files. This makes it much easier for video artists, documentarians, or anyone with a camcorder and iMovie, to share their video content on a blog or website. To this point, BitTorrent has been complicated enough that it hasn’t been adopted by artists, which means that most of the content people are sharing is being posted by people who didn’t make it themselves, mostly Hollywood movies and TV shows. But what’s exciting about peer-to-peer is that it’s a free distribution method for people who could never afford distribution. With Blog Torrent, anyone can share what they make and that means totally new alternatives to mainstream media, in this case, television. We ultimately want to see internet “TV Channels” that download video in the background and let you watch at your convenience (a TiVo for the internet).

    I know we’re probably talking intuition rather than hard data here, but what is your sense of the potential audience for Blog Torrent (I mean content creators), and why? Is there any particular experience you’ve had which made you think “We have to do this and it is going to be huge.”

    I think the audience is very, very broad and varied. I have friends, for example, that make artistically serious video work but have never considered offering it online, because it was never practical for them. I hope Blog Torrent will let them jump in. I also expect documentary filmmakers will love this technology– they can make a name for themselves if they’re new, or they can share extra footage and full-length interviews, they can offer old content that they aren’t selling anymore, and I bet they’ll even start to share first-run material for everyone who doesn’t live near an independent cinema. People who make videos and movies always want people to see it and there’s hundreds or thousands of times more content being created than gets out through mainstream channels. Not only that, but the number of content producers is set to explode: video has finally become practical on the desktop and small, hard-drive camcorders are right around the corner. We called it “Blog Torrent” – forgoing our original, and much cooler name “Battle Torrent” – because it makes sharing video as easy as blogging text or photos and, in doing so, might be able to do in the video world what blogs have done in the news world (or more). And whether it’s our software or someone else’s, I think TV is about to face more serious competition than they would ever imagine. There are too many talented people out there that have no space on the dial. And access to television channels is much narrower in terms of access than music, books, newspapers, or magazines– that means new pressures on the system could be even greater when things open up."

    Good French-language summary of P2P TV, in particular the distribution of TV series through Blog Torrent, at http://www.futura-sciences.com/sinformer/n/news5076.php

  18. Exeem

    "Tom Mennecke, news editor of the popular file sharing news site Slyck, claimed on 1 December (2004) that: "EXeem will marry the best features of a decentralised network, the easy searchability of an indexing server and the swarming powers of the BitTorrent network into one program." He told New Scientist: "Decentralising BitTorrent holds the potential to revolutionise the P2P community." Screenshots posted on another site by a self-proclaimed eXeem beta tester show a client that incorporates a search function and the ability to monitor downloading files. Theodore Hong, a P2P programmer in the UK, says that whether eXeem materialises or not, someone will find a way to decentralise BitTorrent searching and tracking. "Something like it is bound to come eventually," Hong told New Scientist. "It will be a big problem for the major media companies because they will have to confront the underlying fact that millions of people want to share files."
    (New Scientist, http://www.newscientist.com/article.ns?id=dn6830)

  19. Internet traffic geared to audiovisual content

    Researchers singled out peer-to-peer file trading as the single fastest-growing consumer of network capacity. Currently, Mauldin said, the amount of traffic from peer-to-peer trading rivals that generated by regular web surfing. Growing demand for data-rich files, such as movies, is further boosting bandwidth consumption. "From mid-2004, we saw a significant shift away from music and on to video," said Andrew Parker, chief technical officer at CacheLogic, a firm based in England that monitors global peer-to-peer traffic. "Before that it was mainly music." While P2P activity accounts for the lion's share of rising bandwidth consumption, internet traffic analysts said the growing popularity of voice over internet protocol, or VOIP, is a factor, too.
    (http://www.wired.com/news/print/0,1294,67202,00.html )

    Cory Doctorow: On Chris Anderson's Long Tail blog, some stats on the meltdown of mainstream media:
    • Music: sales last year were down 21% from their peak in 1999
    • Television: network TV's audience share has fallen by a third since 1985
    • Radio: listenership is at a 27-year low
    • Newspapers: circulation peaked in 1987, and the decline is accelerating
    • Magazines: total circulation peaked in 2000 and is now back to 1994 levels (but a few premier titles are bucking the trend!)
    • Books: sales growth is lagging the economy as a whole

    He follows up with the fact that movies, videogames, and the Web are all growing.

  20. Tools and services that enable Webcasting

    The better known civil society initiatives are Common Bits (http://www.commonbits.org ) and the Broadcast Machine (http://www.particpatoryculture.org/bm ). They are associated with sites that enable sharing of such material through online communities, such as Common Tunes (http://www.commontunes.org ) for music and CommonFlix (http://www.commonflix.org ) for videos. Vimeo (http://www.vimeo.com/ ) allows users to share small clips.

    Many new sites are also acting as repositories such as Our Media (http://www.ourmedia.org ) and the Archive (http://www.archive.org ). One World TV is at http://tv.oneworld.net . Alternative TV stations are built around such open source content. For example UK Nova (http://www.uknova.com ) is webcasting BBC programs which have been put in the public domain. Movies for the masses is a peer to peer financing scheme for producing movies and videos, at http://www.moviesforthemasses.ibiny.com/ . Search engines have been developed to identify this kind of content, see http://video.google.com/ and http://www.omn.org/

    In the corporate world, examples are Audiolink (http://www.audiolink.com/home.html ) and ODEO (http://www.odeo.com ) which assist users in their broadcasting efforts. Prodigem (http://www.prodigem.com ) allows any audiovisual creator to sell their content. Current TV (http://www.current.tv) is a similar attempt to commercialise citizen webcasting.

    Companies are building software that allows users to load time-shifted radio and television, as well as self-created content, onto playback devices such as iPods. Griffin Technologies recently announced iFill, while El Gato's (http://www.elgato.com ) EyeHome software enables viewers to watch internet-downloaded content on their TV.

    Most of the above material was reviewed at http://blog.commonbits.org/2005/06/be_the_media_th.html?

  21. Customer-built network infrastructures, by Clay Shirky

    “According to Metcalfe's Law, the value of an internet connection rises with the number of users on the network. However, the phone companies do not get to raise their prices in return for that increase in value. This is a matter of considerable frustration to them. The economic logic of the market suggests that capital should be invested by whoever captures the value of the investment. The telephone companies are using that argument to suggest that they should either be given monopoly pricing power over the last mile, or that they should be allowed to vertically integrate content with conduit. Either strategy would allow them to raise prices by locking out the competition, thus restoring their coercive power over the customer and helping them extract new revenues from their internet subscribers. However, a second possibility has appeared. If the economics of internet connectivity lets the user rather than the network operator capture the residual value of the network, the economics likewise suggest that the user should be the builder and owner of the network infrastructure. The creation of the fax network was the first time this happened, but it won't be the last. WiFi hubs and VoIP adapters allow the users to build out the edges of the network without needing to ask the phone companies for either help or permission. Thanks to the move from analog to digital networks, the telephone companies' most significant competition is now their customers, because if the customer can buy a simple device that makes wireless connectivity or IP phone calls possible, then anything the phone companies offer by way of competition is nothing more than the latest version of ZapMail.“
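    Metcalfe's Law, as invoked by Shirky here, rests on simple arithmetic: n users can form n(n-1)/2 distinct pairwise connections, so a network's potential value grows roughly with the square of its size, while the operator's cost of adding one more user grows only linearly. That gap between quadratic value and linear cost is the "residual value" the quote says the carriers cannot capture. A one-function illustration:

```python
def potential_links(n: int) -> int:
    # n users can form n*(n-1)/2 distinct pairwise connections, the
    # usual back-of-envelope reading of Metcalfe's Law: doubling the
    # users roughly quadruples the possible connections.
    return n * (n - 1) // 2
```

For example, growing a network from 10 to 100 users multiplies the user count by ten but the potential connections by more than a hundred (45 versus 4,950).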

  22. Definition of Collaborative Citizen Journalism

    "It's called collaborative citizen journalism (CCJ), where ordinary citizens band together on the Web to write original stories and critique mainstream media stories, using the Internet to connect with each other and to make sure their thoughts reach the public. This new form of journalism differs from its more popular blogging cousin in that, unlike blogging, which eschews (in many cases) the more rigorous elements of journalism, collaborative media efforts tap into a particular community to make sure a story is as complete as possible."
    (http://technologyreview.com/articles/05/05/wo/wo_052005hellweg.asp )

  23. The concept of a self-informing public is mentioned in http://journalism.nyu.edu/pubzone/weblogs/pressthink/2004/12/28/tptn04_opsc.html

    Backfence is based on the concept that local news was just a neighbour's fence away, and is now possible again on a global scale, see http://www.backfence.com/what_we_re_doing.html

  24. An example of a video-broadcasting experiment, by IndyMedia, an internet-based independent media network, related to the alterglobalisation movement

    "GENEVA03 is a temporary broadcasting studio during the g8-summit transmitting video and audio streams live from the cultural center l'usine in geneva from may 29 to june 3. The livecast will be streamed on the internet and picked up and redistributed by local and international broadcasters as well as projected in the streets and theatres of Geneva. In order to cover the protests between Geneva, Lausanne and Anmasse in real time, media activists will work from the "everyone-is-an-expert" mobile studio van, which – with a self-adjusting bi-directional satellite dish – will provide a mobile internet connection and transmit live-footage from the roaming protests. The GENEVA03 project is a joint effort of a growing number of video activists and independent filmmakers together with dozens of indymedia reporters, to organize and broadcast independent news coverage from the G8 events. We are currently programming a stream, that, besides the live coverage of the mass-protests, will include movies, concerts, talk-shows, vj sessions, subvertisements and other radically innovative formats."

    Towards a worldwide video syndicate:

    "A Call to Join and Contribute to the Establishment of a Video-Sharing Syndicate/Network

    Project Description: For some time now the idea of utilising peer2peer structures to assemble a user-built distribution platform has been circulating. Recently, in the run-up to the G8 meeting in Evian, a concrete proposal has been made to establish a system for the sharing of video. Long-term we believe that we can assemble a sustainable and scalable platform for audio-visual materials of a critical and independent nature. This is an appeal to groups/individuals to get involved, dedicate some resources, support and expand the project generally. Works to be distributed over the system will vary from somewhat edited footage suitable for use as a stock archive to finished documentaries/films. Each file will be accompanied by metadata in an xml .info file and produced as a searchable RSS feed for people to integrate into their own sites and published on its own website (where there will also be a manifesto, how-to's, contact info for participating groups etc.) Amongst the metadata fields will be a specification for the nature of the license under which the materials may be used (e.g. Creative Commons share-alike)"

  25. Citizen-based journalism initiatives are not just citizen blogs, but rather more sophisticated attempts to create an alternative form of journalism. There are three main types: local news ventures, based on local communities, such as backfence.com; broadly-focused sites such as OhMyNews; and collaborative vetting services where groups of people check articles from the mainstream press.


    "OhmyNews is a kind of 'fantastic mix' of the citizen reporters and professional reporters," Oh told the audience. "It has 35,000 citizen reporters and 40 staff reporters whose reporting style is very similar to professional journalists. So they are in charge of the straight news and investigations."

    Similar initiatives are WikiNews, which is based on a collective 'vetting' of news articles, at http://en.wikinews.org/wiki/Main_Page ; also see News Trust as another vetting cooperative, at http://en.wikinews.org/wiki/Main_Page ; Indymedia: http://indymedia.org ; Take Back the News, http://www.takebackthenews.com/

  26. Book on citizen-based journalism:

    We the Media: Grassroots Journalism By the People For the People by Dan Gillmor 299pp, O'Reilly

    From a review:

    "He tells us of OhMyNews.com in South Korea, which has 15,000 "citizen reporters" filing news and comment; and of wikipedia, the online encyclopedia where anyone can write or edit an article, which now has more than one million articles in more than 100 languages. He tells us about bloggers who have bigger audiences than many newspapers, and who have become just as influential as any specialist journalist in their sector. How Russ Kick of the alternative news site The Memory Hole used the freedom of information act to get photos of dead US soldiers being brought back from Iraq in flag-draped caskets into the public domain; and how bloggers swarmed together to claim the scalp of Trent Lott, the majority leader in the US Senate, after he appeared to wax nostalgic for a racist past at a fellow senator's birthday dinner. Gillmor tells of his own experience as a columnist on the San Jose Mercury, starting to write a blog and dealing with comments and criticisms from his readers, who, he claims, "have made me a better journalist, because they find my mistakes, tell me what I'm missing and help me understand nuances".

  27. Weblogs as a process of mass-amateurisation, not mass-professionalisation, at http://shirky.com/writings/weblogs_publishing.html

  28. Blogs defined as 'the self in conversation', by David Weinberger

    "And it seems to me that one of the reasons why weblogs are being maintained by people who have a handful of readers, as well as by people who have many readers, is that the weblogs are doing something for that person, and for the groups that form around the weblogs. So, for example, a big part of it is that weblogs are a way that we have a voice on the Web. And, in fact, not simply voice, because we had that before. We could have posted a Web page or joined a discussion group or whatever. Weblogs are persistent. That space stays there, and every day or five times a week or whatever it is, you update that page. And people come back to that page, and that page becomes sort of your proxy self on the Web. The promise of the homepage was that we would have a persistent place that would be our Web presence. Well, now we do. And they're called weblogs, so weblogs are self, and they're self in conversation with others. So much of weblogging involves responding to other people or getting comments or linking to other people. So that's a big deal to have now a place that is a Web self that's created by writing and is created in conversation with other people. Of course that's a big deal. It doesn't have much to do with the media."

  29. RSS Feeds

    The Washington Post explains: “RSS lets Web sites publish free "feeds" of their content, which a program called a newsreader collects on a set schedule, displaying new headlines and links for you to read within the newsreader or, with one click, in your Web browser"

    Some sites offer the same functions as an RSS reader, i.e. the possibility to combine various blogs in folders and to monitor them all from the same place, see http://www.bloglines.com/

  30. Self-publishing

    "For the first time, print-on-demand companies are successfully positioning themselves as respectable alternatives to mainstream publishing and erasing the stigma of the old-fashioned vanity press. Some even make a case that they give authors an advantage – from total control over the design, editing and publicity to a bigger share of the profits."
    (http://www.nytimes.com/2005/04/24/books/review/24GLAZERL.html? )

    The article mentions such examples as iUniverse and Booksurge. See also lulu.com.

  31. Nodeb.com

    “On Nodeb.com, people list their open nodes, essentially inviting strangers to join a worldwide community of users. This site has more than 11,000 registered access points in the United States. Even if service providers can make it more difficult for users to share Internet access, techies will eventually find a way around them.”
    (http://www.nytimes.com/2004/03/19/opinion/19CONL.html?th )

    An article about the advances of the "Personal Telco" movement in the U.S., at http://www.csmonitor.com/2005/0615/p01s03-ussc.html ; home page at http://www.personaltelco.net/static/index.html

  32. Wireless Commons in Hawaii

    Here’s a description of what is happening in Hawaii, where a peer to peer wireless network is covering more than 300 square miles:

    "Now people all over the island are tapping into Wiecking's wireless links, surfing the Web at speeds as much as 100 times greater than standard modems permit. High school teachers use the network to leapfrog a plodding state effort to wire schools. Wildlife regulators use it to track poachers. And it's all free. Wiecking has built his network through a coalition of educators, researchers, and nonprofit organizations; with the right equipment and passwords, anyone who wants to tap in can do so, at no charge.”

    The Wireless Commons reading list:

    Additional Reading
    • “Radio Revolution: The Coming Age of Unlicensed Wireless” by Kevin Werbach, published by the New America Foundation. [1]
    • Building Wireless Community Networks. 2001. by Rob Flickenger. O’Reilly.
    • Wired/Unwired: The Urban Geography of Digital Networks. 2003. by Anthony Townsend. Unpublished PhD dissertation. [2]
    (http://www.wirelesscommons.org/ )

  33. Municipal and local wireless networks

  34. Mesh Networks or Ad Hoc Networks for the telecom sector, as described in The Economist:

    "The mesh-networking approach, which is being pursued by several firms, does this in a particularly clever way. First, the neighbourhood is “seeded” by the installation of a “neighbourhood access point” (NAP)—a radio base-station connected to the Internet via a high-speed connection. Homes and offices within range of this NAP install antennas of their own, enabling them to access the Internet at high speed. Then comes the clever part. Each of those homes and offices can also act as a relay for other homes and offices beyond the range of the original NAP. As the mesh grows, each node communicates only with its neighbours, which pass Internet traffic back and forth from the NAP. It is thus possible to cover a large area quickly and cheaply.”
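    The relay scheme The Economist describes is, at bottom, routing over a graph of radio neighbours: each home forwards traffic hop by hop until it reaches the access point. A minimal sketch using breadth-first search follows; the node names and topology are invented for illustration, and real mesh protocols are of course far more dynamic.

```python
from collections import deque

def route_to_nap(links: dict, start: str, nap: str = "NAP"):
    """Breadth-first search over neighbour links: find the shortest chain
    of relays carrying a home's traffic to the neighbourhood access point."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == nap:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # out of range of the mesh

# A toy neighbourhood: only home A is in radio range of the NAP, yet
# home C still reaches the Internet by relaying through B and A.
MESH = {
    "NAP": ["A"],
    "A": ["NAP", "B"],
    "B": ["A", "C"],
    "C": ["B"],
}
```

This is why the mesh grows cheaply: adding home C requires no new wiring, only that C be within radio range of any existing node.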

  35. Mark Pesce on building the alternative media network

    Pesce's proposal is specifically for a network that could distribute the same programming across all nodes, rather than each node doing something different.

    "So how do you turn these little stations into a network? Well, there are two answers to this question. The first is fairly obvious: you put the transmitters close enough together that each station is a paired receiver/transmitter, and in so doing you create a mesh network of transmitters. The receiver picks up the signal and passes it along to the transmitter, which rebroadcasts it on the same frequency. This is somewhat analogous to how mobile networks work – you move from cell to cell and the signal follows you seamlessly – and is very well suited to densely populated urban districts, college campuses, public events, and so forth. The costs for each node in such a system are very low – probably less than fifty dollars for both the AM receiver and the transmitter…. Now it isn't possible to blanket an entire sparsely populated country…. In situations like this, Internet streaming comes to the rescue. Any signal which can be delivered via AM radio can also be delivered via the internet at dial-up speeds. The streaming signal output can be plugged into the AM transmitter, and, once again, you've got your network. In this way you can cover both the densely populated areas and the spaces in between them with one network. Now both of these proposals are more than just idle ideas – they're the heart of a new network – RADIO RHIZOME – which launched in Los Angeles."

  36. Mark Pesce on the internet TV tuner and its disruptive effects on traditional broadcasting:

    “I do believe that it is appropriate to examine the politics of scarcity with respect to television broadcasting, and engineer a solution which effectively routes around the problem (to steal a phrase from John Gilmore), recapitulating the Britannica to Wikipedia process. As media consumers, we need to liberate ourselves from the anti-market forces of the free-to-air commercial networks, and, as creators and purveyors of audiovisual content, we need to free ourselves from the anti-market forces of commercial networks as programme distributors. In other words, we need to develop a comprehensive computational and emergent strategy to disintermediate the distributors of audiovisual media, directly connecting producers to consumers, and further, erasing the hard definition between producer and consumer, so that a producer’s product will only be identifiable by its inherent quality, in the eyes of the viewer, and not by the imprimatur of the distributor… The pieces are in place for a radical reconfiguration of the technology of programme delivery to the TV viewer. Digital television, thought to be the endpoint of this revolution, was actually only its beginning, and while digital televisions are very useful as display monitors, their broadcast tuners with their sophisticated analog electronics will be completely obsolete once broadband supplants broadcast as the delivery medium. The digital TV is a great output device, but a lousy tuner, because the design of the device reinforces the psychology of spectrum scarcity. What we need, therefore, is a new device, which sits between the Internet, on one hand, and the digital television set, on the other, and acts as a new kind of tuner, thereby enabling a new, disintermediated distribution mechanism. The basic specification for this device is quite simple: it would be capable of locating, downloading and displaying audiovisual content, in any common format, on the viewer’s chosen display device.
That display device doesn’t even need to be a digital television – it could be a PC. Or the soon-to-be-released PSP, the PlayStation Portable. Or a 3G cell phone. This intermediary device – the “Internet tuner,” if you will – could be a hardware-based set-top box, or a piece of software running on a more general-purpose computing device – it doesn’t really matter…When the idea for the Internet tuner popped into my head… I presumed that I’d stumbled onto a completely novel idea. I’ve discovered how wrong I was. Projects like the BBC Internet Media Player, MythTV on LINUX, Media Portal for Xbox and Windows, Video LAN Controller for Mac OS X, Windows and LINUX – the list goes on and on. Just four weeks ago TiVo announced that they’re going to release a software upgrade which will make their PVRs Internet-aware, so that they can locate and download Internet audiovisual content. These ideas are floating around the commercial software community, too, in products like Microsoft IPTV, and SnapStream’s Beyond TV. Many people are working toward the features of the Internet tuner, but none of them – to my knowledge – have brought these pieces together with an emphasis on the emergent qualities of the tuner as a tool for communication…the Internet tuner or something very much like it will do for audiovisual media what the Web did for print – make it immediately accessible from anywhere, at any time, for any reason. Because of the Web, libraries are transforming from repositories of knowledge into centers where people come to be pointed toward online repositories. The library is evolving into a physically constituted Google.”
    (http://www.disinfo.com/site/displayarticle4565.html; see also http://www.hyperreal.org/~mpesce/fbm.html)

  37. Voice over Wi-Fi

    “Today people take laptops to wireless hot spots in coffee bars and airports to check their e-mail messages and to explore the Internet. Soon they may pack a new type of telephone and take it along, too, to make inexpensive calls using those wireless connections. The phones are called voice over Internet protocol over Wi-Fi (or, simply, voice over Wi-Fi) handsets. Like conventional voice over Internet protocol, or VoIP, services, they digitize the voice and send it as data packets over the Internet. But they do it wirelessly, over an 802.11, or Wi-Fi, network. And also like conventional VoIP, the technology may become popular with people who want to economize on their long-distance bills by using Wi-Fi connections when possible."

  38. The economics of netcasting, by Mark Pesce

    “A broadcaster spends the same amount of money whether 10 people or 10 million are watching a broadcast, because the broadcast tower reaches all who want to tune into it. The economics for netcasting are quite different. Anyone can set up a server to send out ten simultaneous program streams – but it requires a million times the infrastructure and bandwidth to serve the same program to 10 million people. Or it used to. The BBC doesn't have the bandwidth to netcast its programming to all 66 million of its viewers. Fortunately it doesn't need that kind of capability, because the BBC has cleverly designed the Flexible TV application to act as a node in a Peer-to-Peer network. Anyone using Flexible TV has access to the programs which have been downloaded by any other Flexible TV client, and can get those programs directly from them. All BBC need do is provide a single copy of a program into the network of P2P clients, and they handle the work themselves. More than this, because of the P2P technology used by the BBC (more on this in a moment) a Flexible TV user can get a little bit of the program from any number of other peers; rather than going through the process of downloading an entire program from one other peer, the Flexible TV client can ask a hundred other clients for small sections of the program, and download these hundred sections simultaneously. Not only does this decrease the amount of traffic that any client has to handle, it also produces a virtuous cycle: the more popular a program is, the more copies of it will exist in the network of peers, and therefore the more easily a peer can download it. In other words, the BBC has cracked the big problem which has prevented netcasting from taking off. In this system of "peercasting" the network is actually more efficient than a broadcast network, because more than one program can be provided simultaneously, and failure at any one point in the network doesn't bring the network down."
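The bandwidth arithmetic behind Pesce's point can be sketched in a few lines. This is an illustrative comparison only — the function names and the single-seed assumption are mine, not figures from the BBC design:

```python
def unicast_server_load(viewers: int, program_size_mb: float) -> float:
    """Classic netcasting: the central server sends a full copy
    of the programme to every viewer."""
    return viewers * program_size_mb

def peercast_server_load(program_size_mb: float) -> float:
    """Peercasting: the origin seeds a single copy into the swarm;
    peers exchange the remaining chunks among themselves."""
    return program_size_mb

# A hypothetical 350 MB programme watched by 10 million viewers:
print(unicast_server_load(10_000_000, 350))  # 3,500,000,000 MB from one server
print(peercast_server_load(350))             # 350 MB from the origin
```

The point of the comparison is that the origin's cost no longer scales with audience size: the swarm itself carries the distribution load, which is why popularity makes delivery easier rather than harder.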

  39. P2P as the necessary model for interactive TV:

    Fortune magazine uncovered yet another aspect of the coming peer to peer age in technology, by pointing out that the current ‘central server based’ methods for interactive TV are woefully inadequate to match supply and demand:

    “Essentially, file-served television describes an Internet for video content. Anyone—from movie company to homeowner—could store video on his own hard disk and make it available for a price. Movie and television companies would have tons of hard disks with huge capacities, since they can afford to store everything they produce. Cable operators and satellite companies might have some hard disks to store the most popular content, since they can charge a premium for such stuff. And homeowners might have hard disks (possibly in the form of PVRs) that can be used as temporary storage for content that takes time to get or that they only want to rent—or permanent storage for what they've bought.”
    (http://www.fortune.com/indexw.jhtml?channel=artcol.jhtml&doc_id=208364 )

    "The new TiVo technology, which will become a standard feature in its video recorders, will allow users to download movies and music from the Internet to the hard drive on their video recorder. Although the current TiVo service allows users to watch broadcast, cable or satellite programs at any time, the new technology will make it possible for them to mix content from the Internet with those programs.”
    (personal communication)

  40. Review of U.S.-based TV-IP developments, in The Washington Post:

    “Now comes a fresh group of contenders for the Internet TV throne, all trying new twists on sending video over the global computer network. They carry funky names, too, like Akimbo, DaveTV, RipeTV and TimeshifTV. All are trying to exploit the increasing number of high-speed Internet links in homes and the declining costs for transmitting and storing digital video. Some offer personalized entertainment networks, ones you or I create by mixing and matching niche programs that appeal to our inner couch-potato. Like TiVo, the digital recorder company, these services are trying to break away from the static program lineups that dominate today's TV. Unlike earlier Web video networks – flops such as Pseudo.com and Digital Entertainment Network – today's contenders collect content from other companies rather than producing their own. Most of the new players are operating on the fringes of the Internet video free-for-all. That's because virtually all the leading cable and satellite companies, along with the movie studios, are rushing to develop their own video-on-demand services."

    European TV-IP plans, reviewed by Wired

    “The BBC is quietly preparing a challenge to Microsoft and other companies jostling to reap revenues from video streams. It is developing code-decode (codec) software called Dirac in an open-source project aimed at providing a royalty-free way to distribute video. The sums at stake are potentially huge because the software industry insists on payment per viewer, per hour of encoded content. This contrasts with TV technology, for which viewers and broadcasters alike make a one-off royalties payment when they buy their equipment. Tim Borer, manager of the Dirac project at the BBC's Kingswood Warren R&D lab, pointed out: 'Coding standards for video were always free and open. We have been broadcasting PAL TV in this country for decades. The standard has been available for anyone to use... If the BBC had to pay per hour of coding in PAL we would be in trouble.'
    ( http://www.wired.com/news/technology/0,1282,65105,00.html?)

  41. Overview of some digital radio developments and recording and management tools

    Radio Your Way

    “It's bizarre that five years into the digital video-recorder era, you still can't buy a digital VCR for radio. Why has the electronics industry developed so many machines that let us time-shift Dr. Phil and "Saturday Night Live," but so few that do so for Dr. Joy Browne and "Science Friday"? Actually, there is one such device. Radio YourWay (pogoproducts.com) looks at first glance like a pocket-size (2.2 by 3.9 by 0.7 inches) AM-FM transistor radio, which, in part, it is. But it also contains a built-in timer, so that you can set up a schedule for recording radio broadcasts. Programming it is exactly as easy – or as difficult – as programming a VCR, except that it uses a military-style 24-hour clock instead of AM and PM designations. At the specified time, the radio turns itself on. It tunes in the station, records for the requested interval and then turns off. Once you've captured a show, you can play it back at a more convenient time (or in an area with no reception), pause it while you take a shower or a meeting, fast-forward through the ads, or even archive it to a Windows PC using a U.S.B. cable."
    (http://www.nytimes.com/2004/02/26/technology/circuits/26stat.html?th )


    "AudioFeast: Radio listeners looking for on-demand access to talk and music programs might want to consider a new Internet service that records radio shows. Like a kind of TiVo for Internet radio, AudioFeast can be set to save hundreds of shows, from "Washington Journal" to "Stamp Talk," and manage their transfer onto certain audio players. AudioFeast carries news, weather, business and entertainment programs from dozens of media partners, including National Public Radio, the Arts and Entertainment Network, and The Wall Street Journal. Operating until recently as Serenade Systems, AudioFeast also offers 100 music channels in 16 genres, including blues, jazz and electronica. AudioFeast costs $49.95 a year; a free 15-day trial is available at www.audiofeast.com"

    (quote from http://www.nytimes.com/2004/09/16/t...?pagewanted=all)


    "Created by 35-year-old Canadian programmer Scott MacLean, TimeTrax allows subscribers of XM Radio's satellite radio service to record music off the radio, appending track title and artist information to each song. Fans of indie rock could, for example, cue their satellite radio receivers to an indie rock station, click on Record in the TimeTrax software, go to sleep, and wake up the next day with eight hours' worth of music by the likes of The Fiery Furnaces and Spoon. What's more, users can schedule the software to record a certain channel at a certain time, much the same way people can program a VCR or a TiVo to record a TV show while they're on vacation or at work. Right now the service only works with XM Radio on a device called the PCR, which the company sold so users could listen to satellite radio in their homes instead of just in their cars. Since TimeTrax came out, XM Radio discontinued the device, creating a lucrative market on eBay where the $49 retail units are selling for more than $350. MacLean says that the program has been downloaded about 7,000 times in the two weeks that it has been available. TimeTrax is on the forefront of what will likely be the music and technology industry's next world war: the recording of broadcast digital audio. "We're at the beginning of the next P2P," says Jim Griffin, CEO of Cherry Lane Digital, a music and technology consultancy. "Peer-to-peer is small by comparison." What has Griffin and others interested is the concept that when radios all broadcast digital music signals, programs such as TimeTrax will allow users to search for and capture songs similar to how they do it today with programs such as Kazaa. Instead of grabbing a song from someone's hard drive, users will pluck it from the air via a digital radio signal. It's a new situation, which in part is what makes TimeTrax such an interesting case."
    (quote from http://www.nytimes.com/2004/09/16/t...?pagewanted=all)

    Audio Xtract

    "connects you to a database of Internet radio stations that can be sorted by genre or bandwidth. Once you've found one that appeals to you, just click on Record. The software enables the computer to record the material in the form of individual MP3 files and stores them in a folder. The files are named according to their content, making it easy to delete those – like commercials – you don't want. Because the contents are recorded as MP3 files, they can be played on computers and portable media players and burned onto CD's. Audio Xtract is $50 at www.audioxtract.com"
    (quote from http://www.nytimes.com/2004/09/16/t...?pagewanted=all)

  42. Shoutcast aims to enable the setting up of streaming radio broadcasts on the internet, see http://www.shoutcast.com/

  43. Business Week, on the future of internet telephony, at http://www.businessweek.com/technology/tc_special/04voip.htm

    The new generation of VoIP telephones, reviewed at http://www.nytimes.com/2005/05/05/technology/circuits/05basics.html?

  44. See http://en.wikipedia.org/wiki/LAMP

  45. The history of social software and related earlier concepts (groupware, etc..) is narrated in this excellent overview, at http://www.lifewithalacrity.com/2004/10/tracing_the_evo.html

    The development of 'gifting technologies' is described here at http://www.firstmonday.org/issues/issue9_12/mcgee/index.html

  46. Examples of MoSoSo, mobile social software

    "Typically, users set up a profile listing interests, hobbies and romantic availability. They also state what kind of people they'd like to meet. Because the service is tied to a mobile device, it knows when people with similar interests are near each other. Not surprisingly, MoSoSos are ideal for hooking up young, active professionals tied to their mobile phones or laptops, and they're starting to take off. Here are some of the leading players:

    Dodgeball: Currently rolled out in 22 U.S. cities, and with about 15,000 users, dodgeball is the American MoSoSo standard-bearer. It works, explained founder Dennis Crowley, by having users check in with text messages announcing where they are. Then, because dodgeball maintains a database of hundreds of nightspots in each city, anyone on a user's friends list who is within 10 blocks gets a message that his or her pal is nearby. The service also has a "crush" feature. Users view profiles of other members and designate ones they'd like to meet. If the object of a crush is nearby, he or she gets a message to that effect. The system maintains privacy by identifying users only by screen names. "I can't tell you how many people I've met through this," said McGunigle. "It has not only simplified my socializing habits, but has allowed me to meet people I would not have met otherwise."

    Playtxt: Playtxt's 6,000 members key in the postal code where they want to be found when they're on the go. Then, like dodgeball, anyone can see which friends, or friends of friends, are within that postal code."
    (http://www.wired.com/news/culture/0,1284,66813,00.html? )
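The proximity matching that dodgeball describes — notify me when a friend checks in within 10 blocks — can be sketched as follows. Everything here is hypothetical: the grid coordinates, the Manhattan block metric and the function names are illustrative stand-ins, not dodgeball's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str        # screen name only, matching dodgeball's privacy model
    friends: set     # screen names on this user's friends list
    location: tuple  # (avenue, street) block coordinates on a city grid

def block_distance(a: tuple, b: tuple) -> int:
    """Distance in city blocks, as Manhattan distance on the grid."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nearby_friends(checked_in: User, others: list, radius: int = 10) -> list:
    """On a check-in, return the friends within `radius` blocks —
    the people who would receive a 'your pal is nearby' message."""
    return [u.name for u in others
            if u.name in checked_in.friends
            and block_distance(checked_in.location, u.location) <= radius]

alice = User("alice", {"bob", "carol"}, (0, 0))
bob = User("bob", set(), (3, 4))      # 7 blocks away, on the friends list
dave = User("dave", set(), (1, 1))    # nearby, but not a friend
carol = User("carol", set(), (20, 0)) # a friend, but too far away
print(nearby_friends(alice, [bob, dave, carol]))  # ['bob']
```

The design point is that the service, not the user, does the matching: because the check-in carries location, "who of my friends is near me right now" becomes a simple filter over the database.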

  47. Shinkuro, at http://www.shinkuro.com/

  48. How P2P relieves information bottlenecks in centralized servers, by programmer Stephane Champallier

    "Take an example: I write a piece of 'groupware' software (e.g. Exchange). In a standard setup, I have one server and a string of clients connecting to it. If client A wants to send a mail to client B, A sends the mail to the server, which then forwards it to client B. Now imagine there are 10 clients and each one sends a mail to all the others. A quick calculation shows that the server will receive 45 messages and send 45. Now imagine there are 100 clients: the same scenario involves 2 x 4,950 messages. Roughly speaking, by multiplying the number of clients by 10, I have multiplied the number of messages passing through the server by 100. Had I multiplied the clients by 100, I would have multiplied the server-side messages by 10,000. What this little calculation tells us is that if everything passes through the server, the server will quickly become congested. Another analogy is the roundabout: if you keep adding streets (clients) and cars (messages) feeding into the roundabout, it quickly clogs up.

    To solve this, P2P arranges for each client to address its recipient directly, without going through a server. In our first scenario, each client receives and sends 9 messages, and the server handles 45. In P2P, each client receives and sends 9 messages, and the server handles 'nothing'. I put 'nothing' in quotation marks because things are not quite that simple. The workload has thus been spread more uniformly and a bottleneck eliminated. That is the technical benefit: better load distribution."
    (personal communication, March 2005; translated from the French)
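Champallier's figures follow from simple combinatorics: with n clients each mailing every other, the server relays n(n-1)/2 messages in each direction, so multiplying the clients by 10 multiplies server traffic by roughly 100, while each client's own load stays at n-1. A minimal check of the numbers in the quote (the function names are mine):

```python
def server_messages(n_clients: int) -> int:
    """Messages the central server receives (and sends on again)
    when each of n clients mails every other: n*(n-1)/2 each way."""
    return n_clients * (n_clients - 1) // 2

def per_client_messages(n_clients: int) -> int:
    """In P2P each client exchanges mail with the other n-1 peers
    directly, bypassing the server entirely."""
    return n_clients - 1

print(server_messages(10))      # 45, as in the example
print(server_messages(100))     # 4950: 10x the clients, 100x the server load
print(per_client_messages(10))  # 9 per client, whether centralized or P2P
```

The quadratic growth is on the server side only; the per-client cost is linear in both schemes, which is exactly why distributing the relay work removes the bottleneck.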

  49. In the following website, Andrew Feenberg discusses technological determinism, a critical theory of technology, the technical code as the locus of social struggle, and places all that and more in the context of earlier thinkers such as Heidegger, Habermas, Baudrillard, Virilio and others, which he explains with great clarity. Click on the essays at the bottom of the webpage under the heading, ‘Some Background Texts and Applications’.
    (URL = http://www.sfu.ca/~andrewf/ )

  50. Cornelius Castoriadis on the mutual imbrication of the technical and the social:

    "Social organization and technique are two terms that express the creation and self-positing of a given society: in the overall social organization, ends and means, significations and instruments, efficacy and value are not separable. Every society creates its world, internal and external, and this creation is neither instrument nor cause, but a 'dimension' present everywhere." (p. 307)

    "The modern world is no doubt determined, at a host of levels, by its technology; but this technology is nothing other than one of the essential expressions of that world, its 'language' with regard to external and internal nature." (p. 311)

    Source: Cornelius Castoriadis, L'institution imaginaire de la société, Seuil (Points/Essais), 1975; translated from the French

  51. A.Y. Aulin-Ahmavaara, "The Law of Requisite Hierarchy", Kybernetes, Vol. 8 (1979), p. 266