P2P and Human Evolution Ch 3
3. P2P in the Economic Sphere
- 3. P2P in the Economic Sphere
- 3.1.A Peer production as a third mode of production and new commons-based property regime
- 3.1.B The Communism of Capital, or, the cooperative nature of cognitive capitalism
- 3.1.C The Hacker Ethic or 'work as play'
- 3.2 Explaining the Emergence of P2P Economics
- 3.3 Placing the P2P Era in an evolutionary framework
- 3.4 Placing P2P in an intersubjective typology
- More Information
3.1.A Peer production as a third mode of production and new commons-based property regime
There are two important aspects to the emergence of P2P in the economic sphere. On the one hand, as a format for peer production processes (called 'Commons-based peer production', or CBPP, by Yochai Benkler), it is emerging as a 'third mode of production' based on the cooperation of autonomous agents. Indeed, if the first mode of production is free-market capitalism, and the second was the now defunct model of a centrally planned, state-owned economy, then the third mode is defined neither by the motor of profit nor by central planning. To allocate resources and make decisions, it uses neither market and pricing mechanisms nor managerial commands, but social relations.
On the other hand, as the juridical underpinning of software creation, in the form of the GNU General Public License, or as the Creative Commons licenses for other creative content, it is engendering a new commons-based intellectual property regime. Taken together, the GPL, the Open Source Initiative and the Creative Commons, along with associated initiatives such as the Art Libre license, may be seen as providing the 'legal' infrastructure for the emergence and growth of the P2P social formation. Peer production proper covers the first aspect: freely cooperating producers, governing themselves through peer governance, and producing a new type of universal common goods. The second aspect, mostly in the form of free software and open source software, is usually the result of that process, but not necessarily: corporations can also produce free software (freely accessible and modifiable) in a more traditional or hybrid way, and now that many large corporations are embracing open source, this is increasingly the case.
But what is important for us is the following: worldwide, groups of programmers and other experts are engaging in the cooperative production of immaterial goods with important use value, mostly new software systems, but not exclusively. And as we will see later, peer production is much broader than software; it emerges throughout the social field. The new software, hardware and other immaterial products thus created are at the same time new means of production, since the computer is now a universal machine 'in charge of everything': every productive action that can be broken down into logical steps can be directed by a computer. Access to computer technology is distributed, and thus widely affordable given a minimum of financial means and technological literacy. This means that the old dichotomy between workers and the means of production is in the process of being overcome for certain areas of fixed capital, and that the emergence of the viral communicator model, technological meshworks, is extending this model of distributed access to fixed capital assets to more and more areas. It is important to note that software is 'active text' which directly results in 'processes'. In other words, software is not just an immaterial pursuit, but can actively direct material and industrial processes. As a cooperation format, we will discuss it in more detail in the section 'Advantages of the peer production model'. Peer governance models will also be discussed elsewhere.
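The claim that software is 'active text' directing processes can be made concrete with a minimal sketch. All names and step values below are hypothetical illustrations, not drawn from the original text: the point is only that a productive action, once decomposed into logical steps, becomes an executable program.

```python
# A minimal, hypothetical sketch of the idea above: any productive action
# that can be broken down into logical steps can be expressed as 'active
# text' and executed by the universal machine, which then directs the process.

def run_process(steps):
    """Execute a sequence of (name, action) steps in order, collecting results."""
    results = []
    for name, action in steps:
        results.append((name, action()))
    return results

# A toy 'production process' decomposed into logical steps:
production = [
    ("measure", lambda: 42),            # e.g. read a sensor value
    ("transform", lambda: 42 * 2),      # e.g. machine a part from the measure
    ("inspect", lambda: 42 * 2 < 100),  # e.g. a pass/fail quality check
]

outcome = run_process(production)
```

The same text that describes the process also runs it; this is what distinguishes software from a merely descriptive document.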
A further important aspect of peer production is the creation of universal public goods, i.e. the emergence of new common property regimes. As the creation of a new type of commons, it takes the form either of the Free Software movement ethos, as defined by Richard Stallman (Stallman, 2002), or of Open Source projects, as defined by Eric Raymond (Raymond, 2001). Both are innovative developments of copyright that significantly transcend the implications of private property and its restrictions. However, the ethos underlying the two initiatives is different. While the Free Software Foundation insists that its production is not for exchange on the market and is not to be converted into private property, the Open Source Initiative aims to be compatible with market and business thinking and stresses the efficiency gains that result from a public domain of software.
Free software is essentially 'open code'. Its General Public License says that anyone using free software must give subsequent users at least the same rights as they themselves received: total freedom to see the code, to change it, to improve it and to distribute it. There is some discussion as to whether Free Software must be 'free' in the sense of 'free beer', i.e. gratis. While its spokesmen, including Richard Stallman, clearly say that it is acceptable to charge for such software, the obligation of free distribution makes this a rather moot argument. Companies that sell software, such as Red Hat, which sells a version of Linux, could be said to charge for the services attached to its installation and use, rather than for the freely distributable software itself. This is an important argument for those stressing, as I do, the essentially non-mercantile nature of free software. In any case, whereas in a for-profit enterprise software is developed so that it can be sold as a product, free software, when sold by non-commercial entities, is most often sold as a means of producing more software, strengthening the community, and obtaining the financial independence to continue further projects.
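The copyleft logic described above can be sketched as a toy model. This is not legal terminology or the GPL's actual mechanism, just a hypothetical illustration of the core rule: whoever redistributes or derives from the work must pass on at least the freedoms they received.

```python
# Toy model of the copyleft clause: a derivative work must carry at least
# the same freedoms as the work it derives from. All class, variable and
# work names here are hypothetical illustrations.

GPL_FREEDOMS = {"use", "study", "modify", "distribute"}

class Work:
    def __init__(self, name, freedoms):
        self.name = name
        self.freedoms = set(freedoms)

    def derive(self, name, freedoms=None):
        """Create a derivative work; copyleft forbids narrowing the freedoms."""
        new = set(freedoms) if freedoms is not None else set(self.freedoms)
        if not self.freedoms <= new:
            raise ValueError("copyleft violation: freedoms may not be removed")
        return Work(name, new)

kernel = Work("free-kernel", GPL_FREEDOMS)
fork = kernel.derive("community-fork")  # fine: all four freedoms preserved
# kernel.derive("closed-fork", {"use"}) # would raise: freedoms were removed
```

This is why the license is called 'viral': the freedoms propagate down every chain of derivation, which is what distinguishes copyleft from merely placing code in the public domain.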
Free Software explicitly rejects the ownership of software, since every user has the right to distribute and adapt the code, and it is thus explicitly founded on a philosophy of participation and 'sharing'. Open Source is admittedly less radical: it accepts ownership of software, but renders that ownership feeble, since users and other developers have the full right to use and change it. But since the Open Source model has been specifically designed to ease its acceptance by the business community, which is now increasingly involved in its development, it generally offers a lot more control over the labour process. Open Source licenses allow segments of code to be used in proprietary and commercial projects, something impossible with pure free software. But even free software projects have become increasingly professionalised: they now generally consist of a core of often paid professionals, funded either by nonprofits or by corporations with an interest in their continued expansion, and they use professional project management systems, as is the case for Linux. Their differences, or essential likeness, are a matter of continuous debate in both the FS and OS communities; I will use both concepts for their underlying similarity, without my usage denoting a preference, though on a personal level I am probably closer to the free software model, which is the 'purer' form of commons-based peer production.
Despite their rootedness in modifications of intellectual property rights, both have the effect of creating a kind of public domain in software, and can be considered part of the information commons. The GPL, however, does this while completely preserving the authorship of its creators. Free software and open source are exemplary of the double nature of peer to peer that we will discuss later: it is within the system, but partly transcends it. Though it is increasingly attractive to economic forces for its efficiency, the profit motive is not the core of why these systems are taken up; it is much more about the use value of the products. One could say that they are part of a new 'for-benefit' sector, which also includes the NGOs, social entrepreneurs and what the Europeans call 'the social economy', and that is arising next to the 'for-profit' economy of private corporations. Studies show that the personal development of participants is a primary motive, despite the fact that quite a few programmers are now paid for their efforts. Open Source explicitly promotes itself through its capacity to create more efficient software in the business environment. It is even being embraced by corporate interests such as IBM and other Microsoft rivals as a way to bypass the latter's monopoly, since the creation of an open infrastructure is clearly crucial and in everyone's interest. But through its generalization of a cooperative mode of working, and through its overturning of the limits of property, which normally forbid other developers and users from studying and improving the source code, it goes beyond the property model, beyond the authoritarian, bureaucratic, or 'feudal' modes of corporate governance, and beyond the profit motive. We should also note that we have here the emergence of a mode of production that can be entirely devoid of a manufacturer. In the words of Doc Searls, senior editor of Linux Journal, we see the demand side supplying itself.
Seen from the point of view of capitalism or private for-profit interests, commons-based peer production has the following advantages:
- it represents more productive ways of working and of mobilizing external communities for its own purposes;
- it represents a means of externalizing costs or of lowering transaction costs;
- it represents new types of business models based on 'customer-made' production, such as eBay and Amazon;
- it represents new service-based business models, whereby free software is used as the basis for providing surrounding services (Red Hat);
- it represents a common shared infrastructure whose cost and construction are largely borne by the community, and which both prevents monopolistic control by stronger rivals and provides common standards around which a market can develop.
In all these senses FS/OS forms of peer production are 'within the system'.
We should also stress the dependence of the peer production community on the existing system. Since producers are not paid for their services, they have to work within the mainstream economy: for government or academia, for traditional corporations, running their own individual or small businesses, or moving from project to project. Thus, despite its growth, peer production is still relatively weak. Though it outcompetes its for-profit rivals in efficiency, increases the welfare of its producers, and creates important use value, it covers only part of the economy, mostly immaterial processes, while the mainstream capitalist economy functions as a full system. In this sense, peer to peer is immanent in the system, and productive of capitalism itself, as we have shown in the first chapter. But it is also more than that: a transcendent element that goes beyond the larger system of which it is a part. It is a germ of something new: it still goes 'beyond' the existing system.
To summarise the importance of the 'transcending' factors of Commons-based peer production:
- it is based on free cooperation, not on the selling of one's labour in exchange for a wage, and is motivated primarily neither by profit nor by the exchange value of the resulting product;
- it is not managed by a traditional hierarchy;
- it does not need a manufacturer;
- it is an innovative application of copyright which creates an information commons and transcends the limitations attached to the property form.
How widespread are these developments? Open source software is already the mainstay of the internet's infrastructure (Apache servers); Linux is an alternative operating system that is taking the world by storm. It is now practically possible to build an Open Source personal computer that exclusively uses OS software for the desktop, including database, accounting and graphical programs, as well as browsers such as Firefox. Microsoft, the current operating system monopoly, recognizes it as its main threat. As a collaborative method of producing software, it is being used increasingly by various businesses and institutions. Wikipedia is an alternative encyclopedia produced by the internet community which is rapidly gaining in quantity, quality, and number of users. And there are several thousand such projects, involving at least several million cooperating individuals. If we consider blogging a form of journalistic production, then it must be noted that it already involves between 5 and 10 million bloggers, with the most popular ones attracting several hundred thousand visitors. We are pretty much in an era of 'open source everything', with musicians and other artists using the approach for collaborative online productions as well. In general, this mode of production achieves 'products' that are at least as good as, and often better than, their commercial counterparts. In addition, there are solid reasons to expect that, if the open source methodology is consistently used over time, the end result can only be better alternatives, since it mobilizes vastly more resources than commercial products can.
Open source production operates in a wider economic context, which we would like to describe as 'the communism of capital', with 'the hacker ethic' functioning as the basis of its new work culture.
Figure: Choosing an Open Source Desktop

Nature of Program → Free Software / Open Source Alternative
- Desktop Operating System → Linspire Lindows, Gnome, or BeOS Max
- Office Suite → OpenOffice or Gnome Office
- Groupware (IBM Lotus Notes) → Horde Project, or Net Office Project
- Database → Twiki, Druid, Gnome DB
- Fax (Esher VSI Fax) → HylaFax or Mgetty+Sendfax
3.1.B The Communism of Capital, or, the cooperative nature of cognitive capitalism
In modernity, the economic ideology sees autonomous individuals entering into contracts with each other, selling labour in exchange for wages, exchanging commodities for fair value, in a free market where the ‘invisible hand’ makes sure that the private selfish economic aims of such individuals, finally contribute to the common good. The ‘self’ or subject of economic action is the company, led by entrepreneurs, who are the locus of innovation. Thus we have the familiar subject/object split operating in the economic sphere, with an autonomous subject using and manipulating resources.
This view is hardly defensible today. The autonomous enterprise has entered a widely participative field that blurs clear distinctions and identities. Innovation has become a very diffuse process. The enterprise is linked with its consumers through the internet, today facing less a militant labour movement than a 'political consumer' who can withhold his/her buying power, backed by an internet and blogosphere able to damage corporate images and branding in the very short term through viral explosions of critique and discontent. It is linked through extranets with partners and suppliers. Processes are no longer only internally integrated, as in the business process re-engineering of the eighties, but externally integrated in vast webs of inter-company cooperation. Intranets enable widespread horizontal cooperation not only among the workers within the company, but also beyond it. Thus the employee is in constant contact with the outside, part of numerous innovation and exchange networks, constantly learning in formal but mostly informal ways. Because of the high degree of education and the changing nature of work, which has become a series of short-term contracts, a typical worker has not in any real sense gained his essential skills and experience within the company he is working for at any particular moment, but expands his skills and experience throughout his working life. Innovation today is essentially 'socialized' and takes place 'before' or 'after' production; reproduction of immaterial goods occurs at marginal cost, and even where reproduction is costly in the material sphere, it is just an execution of the design phase.
Moreover, because of the complex, time-based, innovation-dependent nature of contemporary work, for all practical purposes work is organized as a series of teams using mostly P2P work processes. In fact, as documented very convincingly by Eric von Hippel in his book Democratizing Innovation (Von Hippel, 2004), innovation by users (and particularly by what he calls 'lead users') is becoming the most important driver of innovation, more so than internal market research and R&D divisions. It is subverting one of the mainstays of the division of labour. Commentators have noted that the whole dichotomy between professionals and amateurs is in fact dissolving, giving rise to the phenomenon of 'citizen engineers'. Users, better than the scientists, know what they need, and now have the skills to develop solutions for themselves, relying on other users for peer support. These user innovation communities are very important in the world of extreme sports such as windsurfing, in technology and online music, and in an increasing number of other areas. In May 2005, Trendwatching.com, a business-oriented innovation newsletter using thousands of spotters worldwide, devoted a whole issue to the topic of 'customer-made' innovation, highlighting several dozen examples from all sectors of the economy. These trends will be greatly strengthened by the further development of 'personal fabricator' technology. But even before this, the process of creating an infrastructure for this type of do-it-yourself economy is proceeding apace.
The smarter companies are therefore consciously breaking down the barriers between production and consumption, producers and consumers, by involving consumers, sometimes in an explicitly open-source-inspired manner, in value creation. Think of how the success of eBay and Amazon is linked to their successful mobilization of their user communities: they are in fact integrating many aspects of commons-based peer production. There are of course important factors, inherent in the functioning of capitalism and the format of the enterprise, which cause structural tensions around this participative nature and the use of P2P models; we will cover these in our explanatory section. The same type of user-driven innovation has also been noted in advertising. Accordingly, new business management theories are needed, such as what Thomas Malone calls "Coordination Theory", which involves studying (and organizing accordingly) the dependencies and relationships within and outside the enterprise. Not surprisingly, this research into 'organisational physics' is also done through open source methods. Apart from 'vanguard corporations' (see my thesis on netarchical capitalism) that incorporate peer production as an essential component of their activities, there is a broad shift towards a new attitude to consumers, with many associated phenomena. Management theorists with a feeling for these trends argue that a radical shift is occurring, and needs to occur, in the managerial class in order to capitalize on these developments. David Rotman of the Rotman School of Management argues that businesspeople will have to become "more 'masters of heuristics' than 'managers of algorithms'". Books describing this shift are Daniel Pink's A Whole New Mind: Moving from the Information Age to the Conceptual Age, and C.K. Prahalad's The Future of Competition: Co-Creating Unique Value with Customers.
So the general conclusion of all the above has to be the essentially cooperative nature of production: companies are drawing on a vast reservoir, a 'commons of general intellectuality', without which they could not function; innovation is diffused throughout the social body; and, if we accept John Locke's argument that work that adds value should be rewarded, then it makes sense to reward the cooperative body of humankind, and not just individuals and entrepreneurs. All this leads quite a few social commentators, both left-wing and liberal (free enterprise advocates), to put the issue of the universal wage on the agenda and to retrieve the early Marxian notion of the 'General Intellect'.
Why do we speak of ‘cognitive capitalism’? For a number of important reasons: the relative number of workers involved in material production is dwindling rather rapidly, with a majority of workers in the West involved in either symbolic (knowledge workers) or affective processing (service sector) and creation (entertainment industry). The value of any product is mostly determined, not by the value of the material resources, but by its level of integration of intelligence, and of other immaterial factors (design, creativity, experiential intensity, access to lifeworlds and identities created by brands). The immaterial nature of contemporary production is reconfiguring the material production of agricultural produce and industrial goods. In terms of professional ‘experience’, more and more workers are not directly manipulating matter, but the process is mediated through computers that manage machine-based processes.
But the most important argument for the existence of a third phase of cognitive capitalism is the hypothesis that the current phase of capitalism is distinct in its operations and logic from earlier forms such as merchant and industrial capitalism. It is based essentially on the accumulation of knowledge assets. Instead of the cycle conception–production–distribution–consumption, we have a new cycle: conception – reproduction of the informational core – production – distribution. The key is now to possess an informational advantage, in the form of intellectual property, which can be embedded in immaterial (software, content) or material (seeds, pharmaceuticals, biotechnology) products. Production itself can be outsourced and is no longer central to competitive advantage. And because the advantage lies in the information, it is protected through monopolies enforced by the state. This in turn leads to increasing and protected profits, with prices no longer bearing any necessary relation to production costs. This holds for seeds, pharmaceuticals, software, content products, biotechnology, etc. These inflated profits have in turn put enormous pressure on the totality of the economy.
According to the hypothesis of cognitive capitalism, there are three main approaches in analyses of the current political economy:
- 'neo-classical economics' seeks the laws of capitalism 'as such', and is today much involved in creating models and mathematizing them; according to CC theorists, it lacks a historical model that can take the changes into account;
- information economy models claim that information/knowledge has become an independent third factor of production, changing the very nature of our economy and making it 'post-capitalist';
- in between lies the hypothesis of cognitive capitalism itself, which recognizes that we have entered a new, third 'cognitive' phase, but holds that it remains within the framework of the capitalist system.
What CC researchers are building on is an earlier and still very powerful school of economic theory, known as the Regulation School and especially strong in France (M. Aglietta), which considers that, despite differences in national models, there are commonalities in the structural evolution of the capitalist system: it has been characterized by different 'regimes', each with its particular mode of 'regulation' (forms of balancing the inherent instability of the system). It was they who focused most on the theories of post-Fordism, arguing that after 1973 the Taylorist-Fordist system of organizing work and the economy (with Keynesianism as its corollary) was replaced by new systems of organizing work and regulating the economy.
McKenzie Wark's Hacker Manifesto (Wark, 2004) goes one step further in this analysis and argues that not only is the key factor of the new era 'information as property', but with it comes the creation of a new ruling class and a new class configuration altogether. While the capitalist class owned factories and machinery, once capital was abstracted in the form of stocks and information, a new class arose which controls the 'vectors of information': the means of producing, storing and distributing information, and of transforming use value into exchange value. This is the new social force he calls the 'vectoralist' class. The class that actually produces the value (as distinct from the class that can 'realise' it and thus captures the surplus value) he calls the hacker class. It is distinguished from the former because it actually creates new means of production: hardware, software, and new knowledge (wetware). See 3.3.D for a fuller explanation of the different interpretations of the current political economy, of which P2P is a crucial element.
However, we believe that though the cognitive capitalism and vectoralist class arguments are key to understanding the current era, they are not sufficient, and we will put forward our own hypothesis to help in understanding the emerging future: the emergence of a netarchical class, which is dependent neither on knowledge assets nor on information vectors, but enables and exploits the networks of participatory culture. See section 3.4.E for a full explanation of this idea.
3.1.C The Hacker Ethic or ‘work as play’
In section 3.2 we will attempt to show the contradictory nature of the relationship between capitalism and peer to peer processes: capitalism needs P2P to thrive, but is at the same time threatened by it. A similar contradiction takes place in the sphere of work. We said before that in the industrial, 'Fordist' model, the worker was considered an extension of the machine. Another way of saying this is that intelligence was located in the process, while the worker himself was deskilled: he was required to be a 'dumb body', following instructions. The worker had to sell his labour in order to survive, and meaning could only be found around the activity of working itself: as a means of survival for the family, as a way of social integration, as a means of obtaining identity through one's social role. But finding meaning in the content of the work itself was exceptional. In post-Fordism, important changes and reversals occur. Today the worker is supposed to communicate and cooperate, and to have a capacity to solve problems. He is required not only to use his intelligence, but also to engage his full subjectivity. Certainly this increases the possibility of finding fulfillment and meaning through work, but that would be to paint too rosy a picture. Inside the company, the quest for fulfillment is often contradicted by the empty purpose of the company itself, especially as efficiency thinking, short-termism and a sole focus on profit take hold as the main priorities. The peer to peer processes characteristic of project teams are in tension with the hierarchical, feudal-like nature of management-by-objectives models, whose 'information scarcity'-based approach is becoming counterproductive even on capital's own terms. Psychological pressure and stress levels are very high, since the worker now carries full responsibility and very high targets.
One could say that instead of exploiting the body of the worker, as was the case in industrial capitalism, it is now the psyche that is exploited, and stress-related diseases have replaced industrial accidents. But this is not all: the productivity model and modes of efficiency thinking have left the factory to diffuse throughout society. It is not uncommon to manage one's family, children and household according to that model. Dual-career parents come home tired and stressed to children who have spent their days in institutions from a very early age; they have little occasion to spend 'quality time' together, and are managed (or manage themselves) like 'human resources' in a very competitive environment. An increasing number of human relations (such as dating) and creative activities have been commoditized and monetized. As the pressure within the corporate timesphere intensifies through the hypercompetition-based model of neoliberalism, learning and the other activities necessary to remain creative and efficient at work have been exported to private time. Thus, paradoxically, the Protestant work ethic has been exacerbated, or, as Pekka Himanen (Himanen, 2001) would have it in his Hacker Ethic, a 'Friday-isation of Sunday' has been going on. In other words, the values and practices of the productive sphere, the sphere of the work week up to and including Friday, defined by efficiency, have taken over the private sphere, the sphere of the weekend, Sunday, which was supposed to be outside that logic. But even within the corporate sphere itself, these developments have led to widespread dissatisfaction among the workforce. Interesting work is being done in investigating the new forms of network sociality, for example by Andreas Wittel, but he also writes that this form of sociality, which he contrasts with community, is geared to the creation and protection of proprietary information.
This is in sharp contrast with peer to peer sociality; Wittel's analysis thus focuses on the exacerbation of the Protestant work ethic and its cultural effects, rather than on the reaction against it. Similarly, Pekka Himanen does not distinguish between entrepreneurs and knowledge workers.
And this is precisely the important hypothesis of a peer to peer sociality: new subjectivities and intersubjectivities (which we will discuss later) are creating a counter-movement in the form of a new work ethic: the hacker ethic (see also Kane, 2003). As mass intellectuality increases through formal and informal education, and due to the very requirements of the new types of immaterial work, meaning is no longer sought in the sphere of salaried work, but in life generally, and not through entertainment alone, but through creative expression, through 'work', yet outside the monetary sphere. Occasionally, as was especially the case during the new economy boom, companies try to integrate such methods, the so-called 'Bohemian' model. This explains in large part the rise of the open source production method. New use value is being created in the interstices of the system: between jobs, on the job when there is free time, in academic circles, or supported by social welfare; or, more recently, by rival IT companies who understand the efficiency of the model and see it as a way to break the monopoly of Microsoft software. But it is done through a totally new work ethic, one opposed to the exacerbation of the Protestant work ethic. And as it was first pioneered by the community of 'passionate programmers', the so-called hackers, it is called 'the hacker ethic'. Himanen (Himanen, 2004) explains a few of its characteristics:
"time is not rigidly separated into work and non-work; intensive work periods are followed by extensive leave taking, the latter necessary for intellectual and creative renewal; there is a logic of self-unfolding at work, workers look for projects at which they feel energized and that expands their learning and experience in desired directions; participation is voluntary; learning is informal and continuous; the value of pleasure and play are crucial; the project has to have social value and be of use to a wider community; there is total transparency, no secrets; there is an ethic that values activity and caring; creativity, the continuous surpassing of oneself in solving problems and creating new use value, is paramount"
In open source projects, these characteristics are fully present; in a for-profit environment they may be partly present but enter into conflict with the different logic of a for-profit enterprise.
3.2 Explaining the Emergence of P2P Economics
3.2.A Advantages of the free software/open sources production model
Why are free cooperative projects of autonomous agents, i.e. peer production models, emerging now? Part of the explanation is cultural, located in a changing set of values affecting large parts of the population, mostly in the Western world. The World Values research by R. Inglehart (Inglehart, 1989) has shown that there is a large number of people who identify with post-material values and who have moved up in the 'hierarchy of values' as defined by Abraham Maslow. For those people who feel relatively secure materially, and who are not taken in by the infinite desires promoted by consumer society, it is inevitable that they will look to other means of fulfillment: in the areas of creation, relationships, and spirituality. The demand for free cooperation in a context of self-unfolding of the individual is a corollary of this development. Just as the development of filesharing is related to the existence of an abundance of unused computing resources, due to the differential between computer processing and human processing (the fact that the latter is much slower creates the abundance in PC resources), P2P as a cultural phenomenon is strongly related to the development of a mass intellectuality and the resulting abundance in creative resources. Not only the underemployment of these resources, but also the growing dearth of meaning associated with working for a consumption-oriented corporation, creates a surplus of creative labour that wants to invest itself in meaningful projects associated with the direct creation of use value.
Apart from these cultural and 'subjective' reasons, there is of course the availability of a global technological framework for non-local communication, coordination and cooperation, strongly linked to the emergence of the internet. As we have outlined in our introduction, there is now a peer to peer infrastructure available through distributed computing, an alternative media and communication infrastructure, and a platform for global autonomous cooperation. In general, we can say that it is access to distributed capital goods that allows for the generation of bottom-up ad hoc networks of people and devices. The fact that 'capital outlays' can be generated without recourse to financial capital or to the means provided by the state or corporations is itself a huge advantage.
There are other good objective reasons that drive the adoption of 'open' collaborative processes: the very 'diffuse' nature of contemporary innovation works against individual appropriation, since there are myriads of inputs necessary to produce a given output, and were that output to be frozen through rigid intellectual property protection, it would stifle the innovation process and put these entities at a competitive disadvantage.
By abolishing the distinction between producer and consumer, open source processes dramatically increase their access to expertise, drawing on a global arena networked through the internet. No commercial entity can afford such a large army of volunteers. So one very clear advantage is the availability of a much larger pool of intelligence which can be devoted to problem-solving. Peer production, though it often takes place through a large number of small teams, also allows for swarming tactics, i.e. the coordinated attention of many people. This is sometimes called the 'piranha effect', as it involves repeated tugging at code or text by many different people, until the result is 'right' and communally validated. Commercial software, which forbids other developers and users from ameliorating it, is much more static in its development and has many other flaws. With FLOSS (= Free/Libre Open Source Software) projects, any user can participate, at least through a bug report or by offering comments. This 'flexible degree of involvement' is a very important characteristic of commons-based peer production: projects usually combine a very motivated core, operating in an onion-like structure surrounded by a flexible periphery of co-developers and occasional collaborators, with many degrees in between, and all have the possibility of permanently 'modulating' their contributions for optimal fit with their personal contexts. Indeed, because the cooperation is free, participants function passionately and optimally without coercion.
The 'Wisdom Game', in which social influence is gained through reputation, augments the motivation to participate with high-quality interventions. In surveys of participants in such projects, the most frequently cited motivation is the writing of the code itself, i.e. the making of the software, and the associated 'learning'. Because a self-unfolding logic is followed, which looks for an optimal feeling of flow, the participants collaborate when they feel most energized. Open availability of the source code and documentation means that the products can be continuously improved. Because of social control and the reputation game, abusive behavior can be checked, and the exercise of power is similarly dependent on collective approval. Eric Raymond has summarized the advantages of peer production in his seminal The Cathedral and the Bazaar:
- programmers motivated by real problems work better than salarymen who do not freely choose their area of work;
- "good programmers can write, but great programmers can rewrite", the latter is greatly accelerated by the availability of open code;
- more users can see more bugs, the number of collaborators and available brainpower is several orders of magnitude greater;
- continuous multiple corrections hasten development, while version control permits falling back on earlier versions in case of instability of the new version;
- the internet allowed global cooperation to occur.
In the sphere of immaterial production and distribution, such as for example the distribution of music, the advantages of online distribution through P2P processes are unmatched. In the sphere of material production, to the extent that it depends essentially on the contributions of knowledge workers, P2P processes are similarly more efficient than centralized hierarchical control.
Yochai Benkler, in a famous essay, 'Coase's Penguin', has given a rationale for the emergence of P2P production methodologies, based on the idea of 'transaction costs'. In the physical world, the cost of bringing together thousands of participants may be very high, and so it may be cheaper to have centralized firms than an open market. This is why earlier experiences with collectivized economies could not work. But in the immaterial sphere used for the production of informational goods, transaction costs are near-zero, and therefore open source production methods are cheaper and more efficient. The example of Thinkcycle, where open source methods are used for a large number of projects, such as fighting cholera, shows the wide applicability of the method. Open source methods have already been applied with a certain success in the biotechnological field and are being proposed as an alternative in an increasing number of new areas. An interesting twist on Yochai Benkler's transaction cost theory is given by Clay Shirky, who explains the role of 'mental transaction costs' in the 'economy of attention', which to a large degree explains the phenomenon of 'gratuity' in internet publishing, and why payment schemes, including micropayment, are so ineffective.
Aaron Krowne, writing for Free Software magazine, has proposed a set of laws to explain the higher efficiency of CBPP (= Commons-based peer production) models:
(Law 1.) When positive contributions exceed negative contributions by a sufficient factor in a CBPP project, the project will be successful.
This means that for every contributor that can ‘mess things up’, there have to be at least 10 others who can correct these mistakes. But in most projects the ratio is 1 to 100 or 1 to 1000, so that quality can be maintained and improved over time.
(Law 2.) Cohesion quality is the quality of the presentation of the concepts in a collaborative component (such as an encyclopedia entry). Assuming the success criterion of Law 1 is met, cohesion quality of a component will overall rise. However, it may temporarily decline. The declines are by small amounts and the rises are by large amounts.
Individual contributions which may be useful by themselves but diminish the overall balance of the project, will always be discovered, so that decline can only be temporary.
(Corollary.) Laws 1 and 2 explain why cohesion quality of the entire collection (or project) increases over time: the uncoordinated temporary declines in cohesion quality cancel out with small rises in other components, and the less frequent jumps in cohesion quality accumulate to nudge the bulk average upwards. This is without even taking into account coverage quality, which counts any conceptual addition as positive, regardless of the elegance of its integration.
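Krowne states his laws qualitatively, but the dynamic they describe is easy to sketch numerically. The toy simulation below (all probabilities and magnitudes are illustrative assumptions, not Krowne's figures) models a stream of edits in which disruptive contributions are rare and cause small declines, while the far more numerous corrective contributions, plus occasional major reworkings, push cohesion quality upward on average:

```python
import random

def simulate_quality(steps=5_000, p_negative=0.01, seed=1):
    """Toy model of Krowne's Laws 1 and 2 (parameters are illustrative):
    rare disruptive edits cause small declines in cohesion quality,
    while numerous corrective edits and occasional major reworkings
    produce small rises and large jumps respectively."""
    rng = random.Random(seed)
    quality, trace = 0.0, []
    for _ in range(steps):
        r = rng.random()
        if r < p_negative:                    # Law 1: disruptors are rare
            quality -= rng.uniform(0.2, 0.5)  # small, temporary decline
        elif r < p_negative + 0.02:           # occasional major rework
            quality += rng.uniform(1.0, 2.0)  # large jump in cohesion
        else:                                 # routine corrective edit
            quality += rng.uniform(0.005, 0.02)
        trace.append(quality)
    return trace

trace = simulate_quality()
assert trace[-1] > 0  # Law 2: quality rises overall despite the dips
```

With these assumed parameters the trace shows the pattern of the corollary: small temporary dips swamped by a steady upward drift.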
Krowne has also done useful work to define the authority models at work in such projects. The models define access and workflow, and whether there is any quality control. The free-form model, which Wikipedia employs, allows anyone to edit any entry at any time. In the owner-centric model, by contrast, entries can only be modified with the permission of a specific 'owner', who has to defend the integrity of his module. He concludes that “These two models have different assumptions and effects. The free-form model connotes more of a sense that all users are on the “same level,” and that expertise will be universally recognized and deferred to. As a result, the creator of an entry is spared the trouble of reviewing every change before it is integrated, as well as the need to perform the integration. By contrast, the owner-centric authority model assumes the owner is the de facto expert in the topic at hand, above all others, and all others must defer to them. Because of this arrangement, the owner must review all modification proposals, and take the time to integrate the good ones. However, no non-expert will ever be allowed to “damage” an entry, and therefore resorting to administrative powers is vanishingly rare.” The owner-centric model is better for quality, but takes more time, while the free-form model increases the scope of coverage and is very fast. The choice between the two models can of course be a contentious issue. In the case of the Wikipedia, the adherents of the owner-centric model, active in the pre-Wikipedia "Nupedia" project, lost out, and the success of Wikipedia's totally open process has presumably proven them wrong. Similar conflicts are reported in many other projects. Collaborative projects are no utopian scheme where everything is better; they are subject to intense human conflict as well.
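The difference between the two authority models is essentially a difference in workflow, which can be sketched in a few lines of code (a hypothetical illustration, not the actual implementation of Wikipedia or Nupedia): in the free-form model an edit becomes visible immediately and review happens after the fact, while in the owner-centric model every edit is a proposal that waits for the owner's approval.

```python
class FreeFormEntry:
    """Free-form model: anyone edits directly; the community
    reviews and reverts after the fact (history kept for reverts)."""
    def __init__(self, text=""):
        self.text = text
        self.history = []

    def edit(self, author, new_text):
        self.history.append((author, self.text))  # old version stays revertible
        self.text = new_text                      # change is visible at once


class OwnerCentricEntry:
    """Owner-centric model: edits are mere proposals until the
    designated owner reviews and integrates them."""
    def __init__(self, owner, text=""):
        self.owner = owner
        self.text = text
        self.pending = []

    def propose(self, author, new_text):
        self.pending.append((author, new_text))   # queued, not yet visible

    def review(self, accepted_authors):
        # The owner gatekeeps: only approved proposals are integrated.
        for author, new_text in self.pending:
            if author in accepted_authors:
                self.text = new_text
        self.pending.clear()
```

The trade-off described above falls out directly: edit changes the visible text at once (speed and scope of coverage), while propose changes nothing until review (quality control, at the cost of the owner's time).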
A general problem still associated with FLOSS software is its frequent lack of user-friendliness: such programs often reflect the biases of a development community, which may have less incentive than corporate entities to make them customer-friendly. This is why a niche has been created for service companies such as Red Hat.
Another important aspect of FLOSS projects is how they handle 'equipotentiality'. While formal degrees have been abandoned, and open participation is in principle encouraged, most projects will over time produce a number of rules governing the selection of participants. The important aspect is that these rules are generated within the community itself, mostly in the early phases. After a while, they tend to consolidate, and they are a given for the new participants who come later.
Crucial to the success of many collaborative projects is their implementation of reputation schemes. These differ from previous reputation-based systems, such as academic peer review, because the open process of participation (equipotentiality) precludes a systematic entrenchment of reputation that could turn it into a factor of conservatism (as it is in science, with its dependence on dominant paradigms) and power. In the better P2P systems, reputation is time-sensitive, depending on the degree of recent participation, and the possibilities of forking and of downgrading reputation grades introduce an aspect of community control, flexibility and dynamism. See in particular the endnote on this topic, outlining the example of the NoLogo site. Reputation-based schemes are crucial because cooperation is based on trust: they offer a collaborative means of indicating who the best contributors to the common value are, while motivating everybody to use the more cooperative, rather than the baser, sides of human nature.
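The time-sensitivity of such reputation schemes can be made concrete with a minimal sketch (the exponential decay and the 90-day half-life are assumptions for illustration, not the scheme of any particular site): every contribution earns a score, but scores fade with age, so standing depends on recent participation rather than accumulating permanently.

```python
import time

HALF_LIFE_DAYS = 90  # assumed decay constant: reputation halves every 90 days

def reputation(contributions, now):
    """contributions: list of (unix_timestamp, score) pairs.
    Each score decays exponentially with age, so reputation tracks
    recent participation instead of accumulating permanently."""
    total = 0.0
    for ts, score in contributions:
        age_days = (now - ts) / 86_400
        total += score * 0.5 ** (age_days / HALF_LIFE_DAYS)
    return total

now = time.time()
old_timer = [(now - 365 * 86_400, 100.0)]  # one big contribution, a year ago
newcomer = [(now - 7 * 86_400, 40.0)]      # a modest contribution, last week
# The recent modest contributor now outranks the faded old-timer.
assert reputation(newcomer, now) > reputation(old_timer, now)
```

Downgrading and forking, mentioned above, would be layered on top of such a base score; the decay alone already prevents reputation from hardening into permanent authority.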
3.2.B How far can peer production be extended?
Given that open source is predicated on abundance, how far can it be extended into the material economy, leaving behind its confinement to the field of pure immaterial production, such as software? One of the great advocates of peer production, Yochai Benkler, who has focused his explanation of its success on the dramatic lowering of transaction costs, squarely places peer production within the limits of immaterial production, and doesn't see an expansion beyond that, explicitly stating that it will not endanger capitalist markets. I take a much more expansive view, because of the 'transcending' factors that I have mentioned repeatedly, and because I believe that human intentionality in favour of participation will pressure social structures into an expansion of the sphere of peer production. But I do agree that there are limits to this expansion. How can we view such a possibility?
If peer to peer is predicated on abundance, we have to know what it is that is abundant, and what it is that is scarce. Information and knowledge are immaterial goods with zero reproduction costs, and they are non-rival goods: if I take your sandwich, you have nothing to eat, so it is a rival good; but if I copy a song from your computer to mine, we both have the song. This is why we have to resist the 'Second Enclosure' movement, which aims to create artificial scarcity in knowledge goods, while the open source movement and the GPL license create the opposite: a guaranteed abundant knowledge commons. Many natural raw materials are rival goods, or they are rival 'generationally', i.e. they take time to regrow if you take from them. But our productive capacities have become abundant. It is here that we have to take a good look at the current 'protocol' of our societies: money, for example, too abundantly available for speculative purposes, is too scarce where it is needed. In fact, money is kept artificially scarce. Financial capital is scarce because it is concentrated in too few hands. It is important to see that such scarcity is not objective, but contingent on the present social, political and economic organization of society. In other words, it doesn't have to be that way. In fact, there are various initiatives aimed at creating the possibility of a non-scarcity based monetary system, based either on general reform of the 'protocol' of money, or on bottom-up systems of complementary currencies. The most radical proposals involve P2P-based 'open money' systems. One historical precedent showing that such reform is not utopian is the existence of 'brakteaten' money in Europe between the 12th and 15th centuries, a period of sustained growth within a system of non-accumulable money.
So, let us recast the question with those distinctions in mind: how far can peer production be extended into the sphere of 'material production'?
The first important aspect of material production is the immaterial design phase. The whole process of design is immaterial and by definition in the sphere of abundance. Making a car today is essentially dependent on immaterial factors such as design, the cooperation of dispersed international teams, marketing and communication. After that, the production of the cars through standardized parts in outsourced production companies is, despite the capital requirement, more of an epiphenomenon. It is therefore not unreasonable to expect an extension of OS production models, at least into the design and conception phase of even material production. We can envisage a future form of society, as described in the GPL (General Public License) Society scenario of Oekonux, where the intellectual production and design of any material product is done through P2P processes.
The second important aspect of material production is the high capital cost of physical production. At present, with a "scarce money" system and the concentration of financial resources, this severely restricts the expansion of P2P modes. However, if we were to succeed in creating distributed forms of capital, in the context of current financial abundance, it is likely that peer production would expand significantly. More pragmatically, and already 'realisable' at present, one could imagine the extension of models such as Zopa, a distributed bank where lenders and borrowers can pool resources. In such user-capitalized models, peer groups would more easily find access to needed capital resources. This is not a utopian scheme, as there is an increasing number of examples. The new generation of viral meshworks is 'user-constituted' or 'user-capitalized': Skype did not build an infrastructure; instead, it is the spare capacity of the users' computers that creates the network.
An important aspect of any transitional period between the present limits on peer production and a possible future expansion would be the introduction of a basic income. A basic income would significantly increase the freedom of producers to opt periodically for more intensive involvement in peer production processes. Capitalism is based on dependency: it separated producers from productive resources, so that producers have to sell their labor in exchange for a salary. A basic income, i.e. an income divorced from any linkage to work, and given to every citizen, would be a crucial means of diminishing such dependence, and of freeing many more producers to choose peer production models. There are many justifications and critiques of the basic income scheme, which we will discuss later, but peer production gives it a new and added rationale: as a means to fund the more efficient peer production and to create an enormous extension of use value creation in society.
Finally, the state might consider writing out competitive bids for crucial technologies, where non-corporate entities could participate, at least for the design phase. If the state were a neutral, or even commons-friendly institution, it would not systematically favour corporate welfare or state ownership, but would also fund, where appropriate and more productive, peer production modes.
In the above paragraphs, we looked at peer production in its most complete definition: free cooperating producers working on common projects that are freely available to all. Let us now look at the more limited, partial aspects of peer to peer, such as the cooperation enabled by its technological infrastructure and software tools. As such practices bring down transaction costs, we can see how they can enable an extension of gift economy practices (such as Local Exchange Trading Systems), or what Yochai Benkler calls 'the Sharing Economy', which involves the sharing of physical assets such as cars (car pooling in the U.S. got a great boost from the web, for example). They will also enable many sites that bring together supply and demand, whether organized by for-profit companies or by autonomous collectives. As a form of management, open source methodologies are being taken up 'inside' companies, especially those joining the Open Source bandwagon, such as IBM (at the least, their own contributions to OS projects have to be managed in a similar fashion). Given the success and quality of delivery of many FLOSS projects, companies will look at how to emulate such processes in their own environment.
In any case, there are now a great variety of areas, where open source modeled methodologies are being used, a case in point being Thinkcycle, "a Web-based industrial-design project that brings together engineers, designers, academics, and professionals from a variety of disciplines". Similar projects are CollabNet and Innocentive.
In conclusion: we have seen how peer production is entirely appropriate for non-rival and abundant knowledge production; that cooperative peer modes of working are being taken up in the traditional for-profit economy; and that forms of sharing are expanding wherever a context of abundant and distributed capital can be achieved. We have also examined the theoretical arguments of why peer production could be expanded even further, given a number of reforms in the political economy. But peer production is not a cure-all and will continue to co-exist with other modes of production. In our opinion, it should co-exist within the context of a reformed market, an expansion of reciprocity-based gift economy practices, and a state form that has integrated new forms of peer governance and multistakeholdership.
The continued overuse of biosphere resources (it seems we are annually consuming 20% more than nature's ability to regenerate them) leads to a likely scenario of depletion and scarcity. At some point, it is likely that we will have to switch from a growth economy model to a 'throughput economy' model, i.e. the steady-state economics described by Herman Daly, where output will not exceed input. Such a no-growth model is incompatible with contemporary capitalism, but might be compatible with 'natural capitalism' models, or gift economy models. Markets by themselves are not predicated on endless growth; only capitalism is.
Even within the sphere of abundant information and knowledge, continued expansion of P2P is not guaranteed. As McKenzie Wark (Wark, 2004) explains, information might be abundant, but in order for it to be accessed and distributed, we need vectors, i.e. the means of production and distribution of information. And these are not in the hands of the producers themselves, but in the hands of a vectoral class. Use value cannot be transformed into exchange value without their intervention. At the same time, through intellectual property laws, this vectoral class is in the process of trying to make information scarce. For Wark, the key issue is the property form, as it is the property form, and nothing else, which renders resources scarce. However, the natural abundance of information, and the peer to peer nature of vectors such as the internet, make this a particularly hard task for the vectoral class. Unlike the working class in industrial capitalism, knowledge workers can resist and create numerous interstices, which is where true P2P is thriving. Their natural task is to extend free access to information, to have a commons of vectoral resources; while the natural task of the vectoral class is to control the vectors, and to change the information commons into tightly controlled properties. But at the same time, the vectoral class needs the knowledge workers (or the hacker class, as McKenzie Wark puts it) to produce innovation, and in the present regime, in many cases, the knowledge workers need the vectors to distribute their work. In our own related hypothesis of the emergence of a netarchical class, which enables and exploits the networks of participatory culture, i.e. the needed platforms for collaboration, a similar tension occurs, since for-profit companies will tend to want to achieve dominance and monopoly and to rig the platforms in their favor.
This is the reason that relations between P2P and the for-profit model of the enterprise are highly contradictory and rife with tensions. P2P-inspired project teams have to co-exist with a hierarchical framework that seeks only to serve the profit of the shareholders. The authority model of a corporation is essentially a top-down, hierarchical, even 'feudal' model. Since corporate power has traditionally been a scarce resource predicated on information control, very few companies are ready to actually implement coherent P2P models, with their inherent demand for an information-sharing culture, as this threatens the core power structure. By their very nature, companies seek to exploit external resources at the lowest possible cost, and to dump waste products into the environment. They seek to pay the lowest socially-accepted wage that is sufficient to attract workers. Mitigating factors are the demands and regulations of the democratic polity, today in particular the demands of the political consumer, and the strength and scarcity of labor. But essentially, the corporation will be reactive to these demands, not pro-active.
P2P is, as we will argue throughout the different sections of this book, always both 'within' and 'beyond' the present system. It is within, because it is a condition for the functioning of the present system of 'cognitive capitalism'. But P2P, if it follows its own logic, demands to be extended to the full sphere of material and social life, and demands the transformation of that sphere from a scarce resource, predicated on private property, into an abundant resource. Therefore, ultimately, the answer to the question 'can P2P be extended to the material sphere?' should be: only if the material sphere is liberated from its connection to scarce capital, and instead starts functioning on the predicate of over-abundant and non-mediated labor, will P2P effectively function outside the immaterial sphere. Thus P2P points to the eventual overcoming of the present system of political economy.
3.3 Placing the P2P Era in an evolutionary framework
Is it possible to 'historicise' the emergence of peer to peer, to place it into an examination of different social formations? This is what we attempt to do in the following sections.
3.3.A The evolution of cooperation: from neutrality to synergetics
If we take a wider view of economic evolution, beginning with the breakdown of the tribal 'gift economy', which operated in a context of abundance (this counter-intuitive analysis is well explained by anthropologists such as Marshall Sahlins (Sahlins, 1972), who showed that tribal peoples only needed to work a few hours per day for their physical survival needs), we can see that premodern imperial and feudal forms of human cooperation were based on the use of force (the transition from egalitarian Neolithic villages to class-based Sumerian cities took place in the 4th millennium B.C.). Using Edward Haskell's triune categorization of human cooperation (adversarial, neutral, synergetic; Haskell, 1972), this was a win-lose game, which inevitably led to the monopolization of power (either of land and military forces in precapitalist formations, or in the commercial sphere, as in capitalism). Tribute was exacted from the losers in battle (or freely offered by the weak seeking protection), labour and produce from slaves and serfs. In forced, adversarial cooperation, in this win-lose game, the cooperative surplus is less than optimal; it is in fact negative: 1 + 1 is less than 2. Productivity and motivation are low.
In capitalist society, neutral cooperation is introduced. As we said above, in theory free workers exchange their labour for a fair salary, and products for a 'fair' amount of money. In neutral cooperation, the result of the cooperation is average: participants give just their money's worth. Neither participant in a neutral exchange gets better off; 1 plus 1 equals 2. We can interpret this negatively or positively. Negatively: capitalist theory is rarely matched in practice, where fair exchange is always predicated on monopolization and power relationships. The situation is therefore much darker, more adversarial and less neutral, than the theory would suggest. Nevertheless, compared to the earlier feudal models, marked by constant warfare, the monopoly of violence exercised by the capitalist state model limits internal armed conflicts, and adversarial relationships are relegated to the sphere of commerce. The system has proven very productive, and, coupled with the distributive nature of the welfare state which was imposed on it, has dramatically expanded living standards in certain areas of the world. Seen in the most positive light, a positive feedback loop may be created in which both partners feel they are winning, so it can sometimes be seen as a win-win model. But what it cannot do, due to its inherent competitive nature, is transform itself into a win-win-win model (or, in the formulation of Timothy Wilken of synearth.net, a win-win-win-win model, with the biosphere as fourth partner). A capitalist relationship cannot freely care for the wider environment; it can only be forced to care. (This is the rationale for regulation, as self-regulation generally proves even more unsatisfactory in terms of the general interest of the wider public and the survival of the biosphere.)
Here peer to peer can be again defined as a clear evolutionary breakthrough. It is based on free cooperation. Parties to the process all get better from it: 1 plus 1 gives a lot more than 2. By definition, peer to peer processes are mobilized for common projects that are of greater use value to the wider community (since monetized exchange value falls away). True and authentic P2P therefore logically transforms into a win-win-win model, whereby not only the parties gain, but the wider community and social field as well. It is, in Edward Haskell’s definition, a true synergetic cooperation. It is very important to see the ‘energetic’ effects of these different forms of cooperation, as I indicated above:
- forced cooperation yields very low quality contributions;
- the neutral cooperation format of the marketplace generates average quality contributions;
- but freely given synergistic cooperation generates passion.
Participants are automatically drawn to what they do best, at the moments at which they are most passionate and energetic about it. This is one of the fundamental reasons for the superior quality which is eventually, over time, created through open source projects.
Arthur Coulter, author of a book on synergetics (Coulter, 1976), adds a further twist explaining the superiority of P2P. To the objective definition of Haskell he adds the subjective notion of 'rapport', based on the attitudes of the participants. Rapport is the state of persons who are in full agreement, and is determined by synergy, empathy, and communication. Synergy refers to the interactions that promote the goals and efforts of the participants; empathy to the mutual understanding of those goals; and communication to the effective interchange of data. His 'Principle of Equivalence' states that the flow of S + E + C is optimal when the participants have equivalent status to each other. If we distinguish Acting Superior and Acting Inferior on one axis, and Acting Supportively and Acting with Hostility on another, then the optimal flow arises when one treats the other as 'somewhat superior' and with 'some support'. Thus an egalitarian-supportive attitude is congenial to the success of P2P.
Above we have focused on the means of cooperation, but another important aspect is the 'scope' of cooperation, or the amount or 'volume' of what can be shared, in both relative and absolute terms.
This is how Kim Veltman, a Dutch academic, echoed by evolutionary psychologist John Steward, puts it:
“Major advances in civilization typically entail a change in medium, which increases greatly the scope of what can be shared. Havelock noted that the shift from oral to written culture entailed a dramatic increase in the amount of knowledge shared and led to a re-organization of knowledge. McLuhan and Giesecke explored what happened when Gutenberg introduced print culture in Europe. The development of printing went hand in hand with the rise of early modern science. In the sixteenth century, the rise of vernacular printing helped spread new knowledge. From the mid-seventeenth century onwards this again increased as learned correspondence became the basis for a new category of learned journals (Journal des savants, Journal of the Royal Society, Göttinger Gelehrten Anzeiger etc.), whence expressions such as the "world of letters". The advent of internet marks a radical increase in this trend towards sharing.” (http://erste.oekonux-konferenz.de/dokumentation/texte/veltman.html)
In a similar vein, a French philosopher, Jean-Louis Sagot-Duvauroux (Sagot-Duvauroux, 1995), who wrote the book, “Pour la Gratuite”, stresses that many spheres of life are not dominated by state or capital, that these are all based on free and equal exchange, and that the extension of these spheres is synonymous with civilisation-building. The very fact that the cooperation takes place in the sphere of free and non-monetary exchange of the Information Commons, is a sign of civilisational advance. By contrast, the 'monetarisation of everything' (commodification) that is a hallmark of cognitive capitalism, is a sign of de-civilisation.
Recent developments in participatory culture on the internet have stimulated the discipline of cooperation studies, which studies how to promote human cooperation. For example, researchers are trying to determine the maximum size of efficient non-hierarchically cooperating groups, beyond which centralization and hierarchy set in.
Nature of cooperation: the quality of cooperation by the nature of the game
- Low: 1+1 < 2
- Average: 1+1 = 2
- High (non zero-sum, win-win-win games): 1+1 > 2
3.3.B The Evolution of Collective Intelligence
Related to the above evolution of cooperation is the concept of collective intelligence, which concerns any knowledge of a collective that goes beyond or transcends the knowledge of its parts. Collective intelligence is the process whereby a group takes charge of its challenges and future evolution, by using the resources of all its members in such a way that a new level emerges with added qualities.
Jean-Francois Noubel in an online book-in-progress at http://www.thetransitioner.org/ic outlines three stages, arguing that we are in a transition to a fourth. The following is a synthesis of his work.
The first stage is the 'original collective intelligence', which can only exist in small groups, and historically has been typified by the human organisation in the tribal era. Seven characteristics define this stage:
- an emerging whole that goes beyond its parts
- the existence of a 'holoptic' space, which allows the participants to access both horizontal knowledge, of what others are doing, and vertical knowledge, i.e. about the emerging totality; to have collective intelligence, all participants must have this access, from their particular angle
- a social contract with explicit and implicit social rules about the forms of exchange, common purpose, etc.
- a polymorph architecture which allows for ever-changing configurations
- a shared 'linking object', which needs to be clear. This can be an object of attraction (the ball in sports), of repulsion (a common enemy), or a created object (a future goal, an artistic expression).
- the existence of a learning organisation, where both individuals and the collective can learn from the experience of the parts
- a gift economy, in the sense that there is a dynamic of giving in exchange for participating in the benefits of the commons
This original stage had two limits: the number of participants, and the need for spatial proximity.
The second stage is the stage of pyramidal intelligence. As soon as a certain level of complexity is reached, this form transcends the limits in numbers as well as the spatial limits. Cooperation takes on hierarchical formats, with the following characteristics:
- division of labour, in which the constituent parts become interchangeable; based on specialized access to information and panoptism, i.e. only a few have centralized access to the totality
- authority organizes an asymmetrical information transfer, based on command and control
- regulated access to scarce resources, usually through a monetary system
- the existence of norms and standards, often privatized, that allow knowledge to be objectified.
Pyramidal intelligence exists to obtain 'economies of scale' through repetitive processes that add value to an undifferentiated mass of raw material. To see what kind of intelligence predominates in an organisation, adds Noubel, look at how it produces. If it produces mass products then, despite eventual token usage of peer to peer processes, it will essentially be based on hierarchical, pyramidal intelligence.
The third form of collective intelligence is swarming. It exists where 'simple individuals' cooperate in a global project without holoptism, i.e. collective intelligence emerges from their simple interactions. The individual agents are not aware of the whole. This is the mode of organisation of social insects, and of market-based societies. The problem is that in the insect world individuals are expendable for the good of the system, which is unacceptable in the human world because it negates the full richness of persons. This means that the contemporary enthusiasm for swarm intelligence has to be looked at with caution. It is not a peer to peer process, because it lacks the quality of holoptism, the ability of any part to know the whole. Instead, swarming is characterized by 'stigmergy', i.e. 'environmental mechanisms used to coordinate activities of independent actors'.
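Stigmergy, coordination through traces left in a shared environment, can be illustrated with a deterministic toy model, loosely inspired by the classic 'double bridge' ant experiments (the two-route setup, route lengths, deposit rule, and evaporation rate below are illustrative assumptions, not drawn from Noubel's text):

```python
# Two routes to the same goal; agents choose a route in proportion to its
# pheromone level and deposit pheromone in inverse proportion to route length.
paths = {"short": 2.0, "long": 4.0}       # route lengths (illustrative)
pheromone = {"short": 1.0, "long": 1.0}   # the shared environment
EVAPORATION = 0.99

history = []  # fraction of agents on the short route over time
for _ in range(1000):
    total = sum(pheromone.values())
    shares = {p: pheromone[p] / total for p in paths}
    for p, length in paths.items():
        # Shorter routes are reinforced faster; evaporation keeps it adaptive.
        pheromone[p] = (pheromone[p] + shares[p] / length) * EVAPORATION
    history.append(pheromone["short"] / sum(pheromone.values()))

print(round(history[0], 3), round(history[-1], 3))
```

No agent ever compares the two routes or communicates with another agent; each only reads and reinforces the shared pheromone levels, yet the population converges on the shorter route: coordination without holoptism.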
Thus, a fourth level of collective intelligence is emerging, which Noubel calls 'global collective intelligence'. Compared to the original CI it has the following added characteristics:
- a 'sufficient' money as opposed to a scarce money (see The Transitioner.org/ic site for more details)
- open standards that maximize interoperability
- an information system to regulate symbolic exchange
- a permanent connection with cyberspace
- personal development to acquire the capabilities for such cooperation
In this new global collective intelligence, the original limits in numbers and spatial proximity are transcended by creating linkages through cyberspace. In this context, we can see why technological developments are an integral part of this evolution, as they enable this form of networking. What cyberspace does is create the possibility of groups cooperating despite physical distance, and of coordinating these groups in a network. An important aspect of the new cyberspace-enabled collective intelligence will be the increasingly symbiotic relationship between the countless human minds (one billion at present) and the huge networked intelligent machine we are creating. This noospheric networked intelligence is not an alien construction imposed on us, but something we are collectively creating through our sharing and participation.
David Weinberger has recently summarized the history of knowledge exchange for the Release 1.0 newsletter, showing how digitisation has freed categorization from the shackles it had in the physical world. He notes how humanity first started to separate things (shoes in shoe boxes, etc.); then, with the advent of the alphabet, it started to separate the information about things from the things themselves, putting books on library shelves and data in card catalogs. The information would inevitably be classified in a hierarchy of knowledge, a tree structure: one way to know the world, one way to access knowledge. In the 1930s, an Indian librarian named Shiyali Ranganathan 'decentralised' knowledge categorization. An object has different facets, and the user can determine which facet is the most important for him or her. The catalog still organizes the information hierarchically, but flexibly, starting with one facet, then another, following the specifications of the user, as long as the programmer has prefigured these choices. On the web a bottom-up approach is now emerging, which does not necessitate any prior hierarchical categorization. Users add tags, and different users or user groups use different groups of tags, each reflecting their personal or group ontologies, thereby illuminating different aspects of the object. These peer to peer categorization methods are called folksonomies. In the sphere of abundance that is the internet, it is nearly impossible to continue using hierarchical and well-designed metadata systems, due to the sheer volume of data and the large numbers of users who would have to be disciplined, so bottom-up tagging makes a lot of sense. In the peer to peer era, knowledge is liberated from preconceived and forced categorizations. Many authors have examined how our categories of classification are also instruments of power, and noted how different social formations overturned previous forms of categorizing the world.
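A minimal sketch can make the folksonomy logic concrete (the users, tags, and item here are invented for illustration): each participant applies tags from his or her own personal ontology, and the communal categorization is nothing more than the aggregate of those individual choices.

```python
from collections import Counter

# Three users tag the same item, each according to a personal ontology;
# no prior hierarchical categorization is imposed on them.
taggings = {
    "user_a": ["photography", "art", "digital"],
    "user_b": ["art", "tutorial"],
    "user_c": ["photography", "art"],
}

# The communal view is simply the aggregate of the individual tags:
# the more users apply a tag, the more salient that facet of the item becomes.
folksonomy = Counter(tag for tags in taggings.values() for tag in tags)

print(folksonomy.most_common())  # 'art', applied by all three users, ranks first
```

No central authority decided that 'art' was the right category; the ranking emerges bottom-up from overlapping personal ontologies.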
With the P2P classification schemes we see for the first time a recognition of multiperspectival worldviews. Knowledge becomes a distributed network, following a peer to peer logic. Notice how computers themselves have followed a similar logic: from linear calculation to parallel and distributed computing, and from the mainframe/dumb terminal model (centralization), via the client-server model (decentralization), to the internet filesharing model ('the network is the computer'). In computer programming the shift has been from linear and procedural software production methods to object-oriented programming, conceived in terms of autonomous objects.
What is certain is that the emergence of networked media involves a new epistemology, a new way of relating to the truth. In one of the articles in a collection on 'Subjectivation du Net' in the journal Multitudes, issue 21, Jean-Louis Weisberg problematises the new epistemology. The explosion in the number of interconnected humans and machines, who are individually and collectively posting information and sharing knowledge, and linking multiple media formats through hyperlinks and RSS feeds, prefigures entirely new ways of knowing and learning. It is linked to the growing distrust of the older forms of mass media, and even of representational democracy, that we are witnessing.
The objective dimension of truth implied in mass media, where specialized reporters verify facts to establish one narrative of truth, is making way for a truth that emerges out of continuous intersubjective confrontation. Network users experiment, model, and communally discuss events, acquiring the possibility of a much greater intimacy with the discussed event or process, a view of reality which is enriched by the multiple interpretations. But what is needed is a common meta-framework, so that a space is opened up for families of interpretations to compete.
Another important related shift concerns how we learn and how we envision the learning process. Behaviorism, cognitivism and constructivism, the three main learning theories, locate learning within the person, even if the latter admits that learning is socially constructed. Emerging connectivist learning theory, on the other hand, acknowledges that learning can take place 'outside' the individual, through his connections. Since reality is ever shifting and changing, the individual ambition to know everything is a lost cause: paucity of knowledge has been replaced by an abundance of knowledge. Crucial skills are now the ability to know 'where', in our field of connections, the actionable knowledge is located; to evaluate 'what' has to be learned in a context of abundance; and to negotiate and integrate a variety of opinions on any given subject.
Finally, theories about how individuals learn, still the focus of connectivist theory, must be coupled with a study of the new peer to peer knowledge dynamics, especially of the communal validation of truth which occurs within peer groups, and which replaces institutional mediation.
[Table: 'Type of Collective', comparing collectives along dimensions including top-down planning, power type & distribution, and mode of regulation, whether static (printed rules) or dynamic (Galloway: 'Protocol'), applied to material goods & knowledge.]
3.3.C Beyond Formalization, Institutionalization, Commodification
Observation of commons-based peer production and knowledge exchange reveals a number of further important elements, which can be added to our earlier definition and to the characteristic of holoptism just discussed in 3.3.B.
In premodern societies, knowledge is ‘guarded’, it is part of what constitutes power. Guilds are based on secrets, the Church does not translate the Bible, and it guards its monopoly of interpretation. Knowledge is obtained through imitation and initiation in closed circles.
With the advent of modernity, and let us take Diderot's project of the Encyclopedia as an example, knowledge is from now on regarded as a public resource which should flow freely. But at the same time modernity, as described by Foucault in particular, starts a process of regulating the flow of knowledge through a series of formal rules, which aim to distinguish valid knowledge from invalid. The academic peer review method, the setting up of universities which regulate discourse, the birth of professional bodies as guardians of expertise, and the scientific method are but a few of such regulations. An intellectual property rights regime also regulates the legitimate use one can make of such knowledge, and is responsible for a re-privatization of knowledge. If original copyright served to stimulate creation by balancing the rights of authors and the public, the recent strengthening of intellectual property rights can be more properly understood as an attempt at 'enclosure' of the information commons, which serves to create monopolies based on rent obtained through licenses. Thus at the end of modernity, in a process similar to what we described in the field of work culture, there is an exacerbation of the most negative aspects of the privatization of knowledge: IP legislation is dramatically tightened, information sharing becomes punishable, the market invades the public sphere of universities, and academic peer review and the scientific commons are being severely damaged.
Again, peer to peer appears as a radical shift. In the new emergent practices of knowledge exchange, equipotency is assumed from the start. There are no formal rules to prohibit anyone from participation (unlike academic peer review, where formal degrees are required). Validation is a communal, intersubjective process. It often takes place through a process akin to swarming, whereby large numbers of participants will tug at the mistakes in a piece of software or text, the so-called 'piranha effect', and so perfect it better than an individual genius could. Many examples of this kind are described in the book 'The Wisdom of Crowds' by James Surowiecki. Though there are constraints in this process, depending on the type of governance chosen by various P2P projects, what stands out compared to previous modes of production is the self-selection aspect. Production is granular and modular, and only the individuals themselves know exactly whether their particular mix of expertise fits the problem at hand. We have autonomous selection instead of heteronomous selection.
If there are formal rules, they have to be accepted by the community, and they are ad hoc for particular projects. In the Slashdot online publishing system, which serves the open source community, a large group of editors combs through the postings; in other systems every article is rated, creating a hierarchy of interest which pushes the lesser-rated articles down the list. As we explained above in the context of knowledge classification, there is a move away from institutional categorization using hierarchical trees of knowledge, such as the bibliographic formats (Dewey, UDC, etc.), to informal communal 'tagging', what some people have termed folksonomies. In blogging, news and commentary are democratized and open to any participant, and it is the reputation of trustworthiness, acquired over time by the individual in question, which will lead to the viral diffusion of particular 'memes'. Power and influence are determined by the quality of the contribution, and have to be accepted and constantly renewed by the community of participants. All this can be termed the de-formalization of knowledge.
A second important aspect is de-institutionalization. In premodernity, knowledge is transmitted through tradition, through initiation by experienced masters to those who are validated to participate in the chain mostly through birth. In modernity, as we said, validation and the legitimation of knowledge is processed through institutions. It is assumed that the autonomous individual needs socialization, ‘disciplining’, through such institutions. Knowledge has to be mediated. Thus, whether a news item is trustworthy is determined largely by its source, say the Wall Street Journal, or the Encyclopedia Brittanica, who are supposed to have formal methodologies and expertise. P2P processes are de-institutionalized, in the sense that it is the collective itself which validates the knowledge.
Please note my semantic difficulty here. Indeed, it can be argued that P2P is just another form of institution, another institutional framework, in the sense of a self-perpetuating organizational format. And that would be correct: P2P processes are not structureless, but most often flexible structures that follow internally generated rules. In previous social forms, however, institutions became detached from the functions and objectives they were meant to serve, and became 'autonomous'. In turn, because of the class structure of society and the need to maintain domination, and because of 'bureaucratisation' and the self-interest of institutional leaderships, such institutions turn 'against society' and even against their own functions and objectives. Such institutions become a factor of alienation. It is this type of institutionalization that is potentially overcome by P2P processes. The mediating layer between participation and the result of that participation is much thinner, dependent on protocol rather than controlled by hierarchy.
A good example of P2P principles at work can be found in the complex of solutions instituted by the University of Openness. UO is a set of free-form 'universities', where anyone who wants to learn or to share his expertise can form teams with the explicit purpose of collective learning. There are no entry exams and no final exams. The constitution of teams is not determined by any prior disciplinary categorization. The library of UO is distributed, i.e. all participating individuals can contribute their own books to a collective distributed library. The categorization of the books is explicitly 'anti-systemic', i.e. any individual can build his own personal ontologies of information, and semantic web principles are set to work to uncover similarities between the various categorizations.
All this prefigures a profound shift in our epistemologies. In modernity, with the subject-object dichotomy, the autonomous individual is supposed to gaze objectively at the external world, and to use formalized methodologies, which will be intersubjectively verified through academic peer review. Post-modernity has caused strong doubts about this scenario. The individual is no longer considered autonomous, but always-already part of various fields, of power, of psychic forces, of social relations, molded by ideologies, etc.. Rather than in need of socialization, the presumption of modernity, he is seen to be in need of individuation. But he is no longer an ‘indivisible atom’, but rather a singularity, a unique and ever-evolving composite. His gaze cannot be truly objective, but is always partial, as part of a system can never comprehend the system as a whole. The individual has a single set of perspectives on things reflecting his own history and limitations. Truth can therefore only be apprehended collectively by combining a multiplicity of other perspectives, from other singularities, other unique points of integration, which are put in ‘common’. It is this profound change in epistemologies which P2P-based knowledge exchange reflects.
A third important aspect of P2P is the process of de-commodification. In traditional societies, commodification and 'market pricing' were only a relative phenomenon. Economic exchange depended on a set of mutual obligations, and even where monetary equivalents were used, the price rarely reflected an open market. It is only with industrial capitalism that the core of economic exchange started to be determined by market pricing, and both products and labour became commodities. But still, there was a public culture and education system, and immaterial exchanges largely fell outside this system. With cognitive capitalism, the owners of information assets are no longer content to leave any immaterial process outside the purview of commodification and market pricing, and there is a strong drive to 'privatize everything', education included, our love lives included. Any immaterial process can be resold as a commodity. Thus again, in the recent era the characteristics of capitalism are exacerbated, with P2P representing the counter-reaction. With 'commons-based peer production', or P2P-based knowledge exchange more generally, production does not result in commodities sold to consumers, but in use value made for users. Because of the GPL license, no copyrighted monopoly can arise. GPL products can eventually be sold, but such a sale is usually only a credible alternative (since the product can most often be downloaded for free) if it is associated with a service model. It is in fact mostly around such services that commercial open source companies have built their business model (example: Red Hat). Since the producers of commons-based products are rarely paid, their main motivation is not the exchange value of the eventually resulting commodity, but the increase in use value, and their own learning and reputation. Motivation can be polyvalent, but will generally be anything but monetary.
One of the reasons for the emergence of the commodity-based economy, capitalism, is that a market is an efficient means to distribute 'information' about supply and demand, with the concrete price determining value as a synthesis of these various pressures. In the P2P environment we see the invention of alternative ways of determining value, through software algorithms. In search engines, value is determined by algorithms that count the pointers to a document: the more pointers, and the more value these pointers themselves have, the higher the value accorded to the document. This can be done either in a general manner, or for specialized interests, by looking at the rankings within a specific community, or even at an individual level, through collaborative filtering, by looking at what similar individuals have rated and used well. So, in a similar but alternative way to the reputation-based schemes, we have a set of solutions to go beyond pricing, and beyond monetarisation, to determine value. The value determined in this case is of course an indication of potential use value, rather than 'exchange value' for the market.
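The pointer-counting logic can be sketched as a simplified ranking computation in the spirit of PageRank (the link graph, damping factor, and iteration count below are illustrative assumptions; real search engines use far more elaborate variants):

```python
# A tiny link graph: which documents point to which (illustrative only).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

docs = list(links)
rank = {d: 1.0 / len(docs) for d in docs}  # start every document equal
damping = 0.85  # conventional damping factor in PageRank-style schemes

# Power iteration: each document passes its value along its outgoing pointers,
# so a pointer from a highly valued document is worth more than one from an
# obscure document.
for _ in range(50):
    new = {d: (1 - damping) / len(docs) for d in docs}
    for src, targets in links.items():
        for t in targets:
            new[t] += damping * rank[src] / len(targets)
    rank = new

# 'c' ends up highest: three documents point to it, including highly-ranked 'a'.
print(sorted(rank, key=rank.get, reverse=True))
```

No price and no managerial decision enters the computation: the 'value' of a document is a pure synthesis of the distributed linking behaviour of its peers.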
The peer to peer format, as a new organizational model, has been the subject of some study. Below we add a comparison outlining the difference between P2P formats, called 'Edge Organisations' in this context, and Hierarchical Organisations.
Hierarchical Organisations:
- information flows: vertical, coupled with the chain of command
- processes: prescribed & sequential
Edge Organisations:
- command as an emergent property
- information flows: horizontal, independent of the chain of command
- information sharing: post & pull, through eclectic, adaptable marketplaces
- processes: dynamic and concurrent
- power to individuals on the edge
From the book "Power to the Edge" by D. Alberts & R. Hayes
For comparison purposes, see the similar description of P2P-like organizational formats by the Chaordic Commons, an organisation and network formed by Dee Hock.
3.3.D The Evolution of Temporality: towards an Integral Time
Commons-based peer production and the associated work culture, the hacker ethic, also represent a milestone in the history of temporality. A quick reminder of the history of temporal experience according to our premodern–modern–postmodern scheme will show this. Tribal peoples and early agricultural civilizations lived in a cyclic time, following the rhythm of nature and of religious rituals. In many respects it was an experience of an eternal now. Ancestors and mythical founders of the civilizations were deemed to live not in a distant and remote past, but in the same temporality. Where long-term cycles existed, they often took the form of progressive degeneration (as in the Hindu time scheme ending with the Kali Yuga, the end time of the Iron Age) that would then bring on a new cycle of cycles: the myth of eternal return. This would change with the advent of the monotheistic religions which prefigured modernity. Temporality became progressive, going from past to future, seen as an apocalyptic liberation. Modernity started viewing time in a calculating fashion, in discrete blocks which could be measured and managed, and the Judeo-Christian temporal line was transformed into the ideology of Progress. Time was essentially being spatialized. But with modernity came stress: human time became enslaved to the time of capitalist efficiency, to the time of the machines, to the cycles of commerce.
These trends find their apotheosis in our current postmodern times, where competition has become a matter of speed, and where the economy becomes a 24/7 affair. We have described this state of hypercompetition, coupled with time–space condensation and the extension of efficiency thinking to the private sphere, in our section on the hacker ethic, showing also its psychological unsustainability. Many of our contemporaries are now time-sick, imprisoned by very short-term thinking, their time horizon collapsing. Another element associated with current time experience is the emergence of a collective world-time, collapsing into a single mass-lived experience through the role of the mass media. Paradigmatic was the first Gulf War, when millions of people were watching a missile go down on a Baghdad target.
We have often argued that current trends both exacerbate certain aspects of modernity, while at the same time counter-trends point to alternatives going beyond it. The same thing might be said about peer to peer temporality. If postmodernity brought us the supreme alienation of a permanent now, collapsing other temporal necessities and experiences, infiltrating even our private time of intimacy, and exhibiting a temporal imperialism, then peer to peer temporality shows the promise of an 'integral time'.
We argued that CBPP projects offer a number of advantages, such as the self-management of time. Classic industrial production described jobs in great detail, calculated every move (Taylorism), and controlled each worker's rate of output (the volume of production in the shortest possible time). In postmodernity, the focus is on objectives and results, and on the deadlines within which they have to be achieved. For many workers today, life is one of competing deadlines, and of the hundreds of interruptions that stand in the way.
Cooperative CBPP projects traditionally reject such rigid schemes. While work on such projects can be fairly intense, and can be very 'fast' as well, this intensity emerges from the natural life rhythms of the collaborators. It is not imposed from the outside. It is rather the different subgroups which start to condition each other; the time spans are generated internally, more organically following the self-unfolding patterns of the creative work. The human is no longer enslaved to time. There is in fact no clear connection between the time spent on a project and its inherent quality, as many in the artistic world have experienced, and this model is now expanding into other productive fields, as generic knowledge work is creative as well. Whereas in modernity, say the Fordist/Taylorist paradigm, the focus is on 'quantity', and in postmodernity the focus is still on embedding qualitative concerns in the straitjacket of high-pressure objectives and deadlines, in peer to peer the focus is more on quality. 'Work' is about transforming something into a desired use value, and success is measured by how well the use value has been created. The process follows an individual and collective self-unfolding in which the various sub-projects condition each other, gradually coalescing into an outcome both desired and unforeseen.
This shift in temporal experience also has political consequences, outlined by John Holloway in his Revolution without Power. Typical for modernity was that the transformed Judeo-Christian underpinnings of socialist ideology, fused with the apocalyptic and utopian time sense, gave rise to the counter-time of revolution: the wait for a radical transformation, or for the next reform. It was either the reformist time which did not change the 'system as such', or the revolutionary time which did everything for the system's destruction. In both cases there was no integration between the present now and the desired future. Integral time points to another solution. Living in the now, in the refusal to contribute to the self-destruction of our civilization, can be combined with building the alternative as a continuing process.
This is a whole new temporal experience. We call it 'integral time' because it represents an autonomous mastery of time, in which the different temporal experiences (cyclical, linear, etc.) become transparent and are used 'at the right time'. The time for intimacy, the time for rest and relaxation, the time for intellectual and spiritual renewal: each has its different rhythm, which can be acknowledged in CBPP projects in a way that it cannot in the hypercompetitive for-profit world.
3.4 Placing P2P in an intersubjective typology
In my opinion, there is a profound misconception regarding peer to peer, expressed by the various authors who call it a gift economy, such as Richard Barbrook (Barbrook, 1995) or Steven Weber (Weber, 2004). But, as Stephan Merten of Oekonux.de has already argued, P2P production methods are not a gift economy based on equal sharing, but a form of communal shareholding based on participation. In a gift economy, if you give something, the receiving party has to return, if not the gift, then something of at least comparable value (in fact the original tribal gift economy was more about creating relationships and obligations, and a means to evacuate excess, which was not needed for basic survival). In a participative system such as communal shareholding, organized around a common resource, anyone can use or contribute according to his needs and inclinations.
Let me give a context to this claim by introducing the typology of intersubjective relations defined by anthropologist Alan Page Fiske (Fiske, 1993). There are, he says, historically and across all cultures, only four basic types of relating to one another, which form a grammar of human relationships: Authority Ranking, Equality Matching, Market Pricing, and Communal Shareholding. From the following description, one can deduce that P2P does not correspond to Equality Matching, which is the principle behind a gift economy, but to Communal Shareholding.
“People use just four fundamental models for organizing most aspects of sociality most of the time in all cultures. These models are Communal Sharing, Authority Ranking, Equality Matching, and Market Pricing. Communal Sharing (CS) is a relationship in which people treat some dyad or group as equivalent and undifferentiated with respect to the social domain in question. Examples are people using a commons (CS with respect to utilization of the particular resource), people intensely in love (CS with respect to their social selves), people who "ask not for whom the bell tolls, for it tolls for thee" (CS with respect to shared suffering and common well-being), or people who kill any member of an enemy group indiscriminately in retaliation for an attack (CS with respect to collective responsibility). In Authority Ranking (AR) people have asymmetric positions in a linear hierarchy in which subordinates defer, respect, and (perhaps) obey, while superiors take precedence and take pastoral responsibility for subordinates. Examples are military hierarchies (AR in decisions, control, and many other matters), ancestor worship (AR in offerings of filial piety and expectations of protection and enforcement of norms), monotheistic religious moralities (AR for the definition of right and wrong by commandments or will of God), social status systems such as class or ethnic rankings (AR with respect to social value of identities), and rankings such as sports team standings (AR with respect to prestige). AR relationships are based on perceptions of legitimate asymmetries, not coercive power; they are not inherently exploitative (although they may involve power or cause harm).
In Equality Matching relationships people keep track of the balance or difference among participants and know what would be required to restore balance. Common manifestations are turn-taking, one-person one-vote elections, equal share distributions, and vengeance based on an-eye-for-an-eye, a-tooth-for-a-tooth. Examples include sports and games (EM with respect to the rules, procedures, equipment and terrain), baby-sitting coops (EM with respect to the exchange of child care), and restitution in-kind (EM with respect to righting a wrong). Market Pricing relationships are oriented to socially meaningful ratios or rates such as prices, wages, interest, rents, tithes, or cost-benefit analyses. Money need not be the medium, and MP relationships need not be selfish, competitive, maximizing, or materialistic—any of the four models may exhibit any of these features. MP relationships are not necessarily individualistic; a family may be the CS or AR unit running a business that operates in an MP mode with respect to other enterprises. Examples are property that can be bought, sold, or treated as investment capital (land or objects as MP), marriages organized contractually or implicitly in terms of costs and benefits to the partners, prostitution (sex as MP), bureaucratic cost-effectiveness standards (resource allocation as MP), utilitarian judgments about the greatest good for the greatest number, or standards of equity in judging entitlements in proportion to contributions (two forms of morality as MP), considerations of "spending time" efficiently, and estimates of expected kill ratios (aggression as MP). “ (source: Fiske website)
From the above description, it should be clear that the tribal gift economy is a form of sharing based on 'equal' parts, according to a specific criterion of what functions as the common standard for comparison. Thus in the tribal economy, when a clan or tribe (or its members) gives away its surplus, the recipient group or individual is eventually forced to give back, say the next year, at least as much, or lose relative prestige. What such a gift economy does, however, is create a community of obligations and reciprocity, unlike market-based mechanisms, where 'equal is traded with equal' and every transaction stands alone.
Similarly, in the feudal social redistribution mechanism, the rich and powerful compete in gift-giving to the Church or Sangha as a matter of prestige. In this case, what they receive back is not other material gifts but, on the one hand, social prestige, and on the other, the immaterial benefits of 'better karma' ('merit' in S.E. Asian Buddhism) or of being closer to salvation (in the form of indulgences in medieval Christianity). In the gift economy, "something" is always being exchanged.
This is not the mechanism that operates in the sphere of knowledge exchange on the internet. In open source production, filesharing, or knowledge exchange communities, I freely contribute what I can and what I want, without obligation; on the recipient side, one simply takes what one needs. It is common for a web-based project to have, say, 10% active contributing members and 90% passive lurkers. This can be an annoyance, but it is never a 'fundamental problem', for the very reason that P2P operates in a sphere of abundance, where a tragedy of the commons, the abuse of common property, cannot occur, or at least not in the classical sense. In the concept of the Tragedy of the Commons, communal holdings are depleted and abused because they belong to no one, and because physical goods are limited 'rival' goods that can be taken away. The conflict is between the collective interest in preserving the Commons and the individual incentive to abuse it for personal benefit. We should note that this theory assumes an 'unregulated' Commons, left without defense against individual predation, and it is therefore misleading as a general theory of the Commons.
But in the Information Commons created through P2P processes, the value of the collective knowledge base is not diminished by use but, on the contrary, enhanced by it: it is governed, in John Frow's words, by a Comedy of the Commons, or, in a similar metaphor, it produces a Cornucopia of the Commons. This is so because of the network effect, which makes resources more valuable the more they are used. Think of the fax machine, which was relatively useless until a critical mass of users was reached. And the goods are immaterial, and thus 'non-rival', which means that they can be replicated or replenished without cost and cannot be monopolized (unless by law and licenses, hence the intellectual property wars). It is when these 'network externalities' are at play that the Commons form seems most appropriate, functioning better than individual private property. From the point of view of the individual users themselves, who act in their own interest, what P2P systems do is mobilize what economists call 'positive externalities'. These are benefits generated by user resources or behaviour, which are lost if they are not used by others or a collective, but which do not generate negative side effects for the user either.
What the better P2P systems do, however, is make participation, this sharing of positive externalities, 'automatic', so that even passive use becomes useful participation for the system as a whole. Think of how BitTorrent makes any user who downloads a resource in turn a resource for others, independently of any conscious action on the user's part. Say I have a team working on a software project, and it creates a special email system to communicate around development issues. This communication is considered a common resource and archived, and thus, without any conscious effort of the participating members, automatically augments the common resource base. One of the key elements in the success of P2P projects, and the key to overcoming any 'free rider' problem, is therefore to develop technologies of "Participation Capture" (see the endnote on how my concept differs from both panoptical surveillance and 'sousveillance').
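The mechanism just described can be caricatured in a few lines of code. This is an illustrative sketch only, using a hypothetical `Peer` class and ignoring the real BitTorrent protocol's trackers, choking, and piece-selection logic; the point is merely that holding a piece automatically makes a peer a source for it, so even "passive" downloading enlarges the pool of sources.

```python
# Toy model of "participation capture" (NOT the real BitTorrent protocol):
# once a peer holds a piece, it automatically serves that piece to others.

class Peer:
    def __init__(self, name, pieces=None):
        self.name = name
        self.pieces = set(pieces or [])   # pieces this peer can serve

    def sources_for(self, piece, swarm):
        # every peer holding the piece is automatically a source
        return [p for p in swarm if piece in p.pieces]

    def download(self, piece, swarm):
        if self.sources_for(piece, swarm):
            self.pieces.add(piece)        # participation is captured:
            return True                   # the downloader now serves it too
        return False

swarm = [Peer("seed", {"A", "B"}), Peer("u1"), Peer("u2")]
seed, u1, u2 = swarm
u1.download("A", swarm)                   # u1 gets piece A from the seed
print(len(u1.sources_for("A", swarm)))    # piece A now has 2 sources
```

No conscious cooperation is modelled anywhere: the act of downloading is itself what augments the common resource base, which is the design point the paragraph above makes.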
There are of course new social problems arising with P2P, some not yet known, and some already occurring, related to the quality of social behaviour; interestingly, these problems are tackled through the collective as well. For example, Clay Shirky, one of the most astute observers of the new social networking sphere, has observed how 'flaming', which can be a serious problem in mailing lists, has been seriously attenuated by blogs and wikis through a focus on 'social design'. Shirky shows how the design, the protocols in Galloway's sense, has to move away from a focus on the individual user facing a box, towards a recognition of the social usage of these technologies.
The social logic of information and resource sharing is a cultural reversal vis-a-vis the information retention logic of hierarchical social systems. Participation is assumed, and non-participation has to be justified. Information sharing, the public good status of your information, is assumed, and it is secrecy which has to be justified.
So what people are doing in P2P systems, is participating, and doing so they are creating a ‘commons’. Unlike traditional Communal Shareholding, which starts from already existing physical resources, in peer to peer, the knowledge commons is created through participation, and does not exist ‘ex ante’.
All of the above argumentation leads to the conclusion that P2P is not an Equality Matching model (nor a Market Pricing model), but Communal Shareholding. These arguments have an ideological subtext. The reason I stress this analysis is to counter the neoliberal dogma that humans are motivated only by greed. Saying that P2P is a gift economy, which would require a strict accounting of the exchange, or saying that participation is motivated only by the quest for reputation, or that it is a game to obtain attention, corresponds to this same ideology, which cannot accept that humans also have a 'cooperative' nature that can thrive in the right conditions. Our aim is not to deny that humans have the former characteristics, but to point out that cooperation and altruism are just as constitutive of who we are, and that given the right institutional conditions and moral development, the latter rather than the former can be enhanced. There is no need to 'reduce' these characteristics in a one-sided manner; rather, we should recognize the subtle richness and combinations of who we are, and develop the right kind of institutions and knowledge (such as the new field of cooperation studies) to strengthen the cooperative potentialities.
Though the early traditional gift economy was spiritually motivated and experienced as a set of obligations, creating reciprocity and relationships involving honour and allegiance (as explained by Marcel Mauss in The Gift), gifts were nevertheless made in a context of obligatory return, and this involved a kind of thinking quite different from the gratuity characteristic of P2P: giving to a P2P project is explicitly not done for a 'certain' and individual return of the gift, but for the use value, for the learning involved, and perhaps, but only indirectly, for reputational benefits.
The above does not mean that P2P is unrelated to the contemporary revival of gift economy applications. Local Exchange Trading Systems, which are springing up in many places, are forms of Equality Matching, and, from an 'egalitarian' point of view, they may be preferable to Market Pricing mechanisms, since in them any hour of labour has an equal value. Both P2P as Communal Shareholding and contemporary expressions of the gift economy ethos are part of the same spirit of 'gifting', or of free cooperation. Substantial numbers of participants in P2P projects freely give, as do participants in LETS systems and other schemes. The difference lies in whether they expect to receive something specific and of equal value in return.
But what P2P technologies do is enable the creation of information-rich exchanges with dramatically lower transaction costs, thereby enabling gift economy applications, what Yochai Benkler calls a 'sharing economy', as well as numerous P2P-based market exchanges which were not economical before. P2P therefore supports the growth of a gift economy based on its technology, while gifting and sharing practices are strengthened by the P2P ethos as well.
3.4.B P2P and the Market
Quite a few American authors, especially libertarians such as Eric Raymond, but also 'common-ists' such as Lawrence Lessig (Lessig, 2004) with his arguments for a Creative Commons, say in effect that P2P processes are market-based. Is this a correct assumption? A useful distinction is the one made by Fernand Braudel in The Wheels of Commerce (Braudel, 1992), where he distinguishes the ordinary economic life of exchanges at the local level, the fairly transparent market of towns and cities, and monopolistic capitalism. Only the latter has a 'growth' imperative. Growth is a feature of capitalism, not of markets per se.
P2P exchange can be considered in market terms only in the sense that free individuals are free to contribute, or to take what they need, following their individual inclinations, with an invisible hand bringing it all together, but without any monetary mechanism. Thus it is a market only in the sense of the first, and perhaps the second, of Braudel's levels, not the third. What Braudel notes is that small markets function as 'meshworks', i.e. distributed networks, but that capitalism was always-already based on large hierarchical companies rigging the market (large relative to their markets, as was already the case in 14th-century Venice). Market vendors are price takers, following supply and demand; but capital functions as a price setter, controlling the market and keeping out competitors. Thus the latter, and in the contemporary era of financially-dominated capitalism these aspects have been exacerbated to an unprecedented degree, can better be called 'anti-markets', as Manuel DeLanda has suggested. A market can be a necessary condition for P2P processes to occur, but the anti-markets created by capitalism are antithetical to it.
Though some programmers get paid for commons-based peer production, this is not in general their main motivation. P2P products are rarely made for the profit obtained from their exchange value, but more often, and more fundamentally, for their use value and acceptance by a user community. So what Lessig means with his notion of a market-based solution is simply that producers are free to contribute or not, and that users are free to use the products or not. All this means that it is hard to pin down P2P within the old categories of left and right ideologies; it is a hybrid form with market-based and commons-based aspects. But since we have shown that P2P is in fact inextricably bound up with the idea of a Commons and with the intersubjective typology of communal shareholding, the equation of P2P with a market is mostly misleading.
Indeed, note how P2P differs in important respects from a market, even from genuine markets in Braudel's typology (Market Pricing in the typology of Fiske):
- Markets do not function according to the criteria of collective intelligence and holoptism, but rather, in the form of insect-like swarming intelligence. Yes, there are autonomous agents in a distributed environment, but each individual only sees his own immediate benefit.
- Markets are based on 'neutral' cooperation, and not on synergistic cooperation: no reciprocity is created.
- Markets operate for the exchange value and profit, not directly for the use value.
- Whereas P2P aims at full participation, markets only fulfill the needs of those with purchasing power.
Amongst the disadvantages of markets are:
- They do not function well for common needs that do not assure full payment of the service rendered (national defense, general policing, education and public health), and they not only fail to take negative externalities into account (the environment, social costs, future generations), but actively discourage doing so.
- Since open markets tend to lower profits and wages, they always give rise to anti-markets, where oligopolies and monopolies use their privileged position to have the state 'rig' the market to their benefit.
- Though market forces (in fact 'anti-market' forces) increasingly adopt P2P-like functioning, as we have demonstrated in chapter 1 (technological base) and chapter 2 (economic usage), and even as their own organizational format (as demonstrated in Empire by Negri and Hardt), in this case the distributed meshwork will always be subordinated to hierarchy or market pricing.
P2P, in contrast, has a teleology of participation, leading to the opposite characteristic: if centralization, hierarchy or authority models are used, they are in the service of deepening participation. Market forces will apply P2P-like protocols that are proprietary, secret and restricted, and thus opposed to this aim of participation.
3.4.C P2P and the Commons
Eric Raymond's landmark description of the Open Source model, 'The Cathedral and the Bazaar' (Raymond, 2001), compares different methodologies for producing software. Corporate software production methods are called 'the Cathedral', i.e. a big, planned, bureaucratic project, while open source is called a 'bazaar', a free process of cooperation involving many participants; but the concept also carries connotations of the free market idea. An argument to the contrary is that the internet and many open source projects owe their existence to the public sector, which financed internet research and the salaries of participating scientists. And the so-called 'bazaar' is at best a very indirect way to make money, since most of the use value generated by peer production does not easily translate into exchange value, which in most cases is derived from services and not from the 'peer product' itself. Moreover, in actual practice, the building of cathedrals was a massive collective undertaking, initiated by the Church but drawing on popular fervor, a competition in gift-giving, and a great deal of volunteer labour! When we define P2P processes as a form of Communal Shareholding, the picture is much less confused. What people are doing is voluntarily and cooperatively constructing a commons, according to the 'communist principle' (described by Marx in his definition of the last phase of history): 'from each according to his abilities, to each according to his needs'. Recognition of this non-reciprocal aspect of peer production is crucial to understanding the specificities of this new mode.
Since the famous opinion storm generated by Bill Gates's charge that copyright reformers were 'communists', it is important to stress what exactly we are talking about when we use the concept of communism in relation to P2P. Let us therefore not confuse the utopian definition of Marx with the actual practices of the Soviet Union, which were centralized, authoritarian and totalitarian, one of the more pernicious forms of social domination. Using Fiske's grammar of relationships, we could say that the Soviet system, or 'really existing socialism', consisted of the following combination:
- property belonged to the state, but was in fact controlled by an elite social fraction, the nomenclatura, and did not function as common property;
- the economic practices were a combined form of equality matching and market pricing, though the monetary prices were most frequently determined not by an open market, but by political and planning authorities;
- there was no free participation but obligatory hierarchical cooperation;
- socially, there was a very strong element of authority ranking, with one’s status largely determined by one’s function in the nomenclatura.
The reason, of course, is that these systems arose in a context of social and material scarcity and deprivation, inevitably giving rise to a process of monopolization of power for the control of scarce resources.
In contrast, Marx’s definition was predicated on abundance in the material world. If P2P emerges according to this very definition, it is because of a sufficient material base, which allows the types of volunteer labour P2P thrives on (and pays the wages of a substantial part of them), as well as the abundance inherent in the informational sphere of non-rival goods with near-zero transaction costs.
But since peer to peer is neither an ideology nor a utopian project, but an actual social practice responding to real social needs, it can be practiced by anyone, regardless of their formal personal philosophy and possible ideological blinders. Thus the paradox is that American libertarians call it a market, while the European digital left calls it a 'really existing anarcho-communist practice' (Gorz, 2003), though they are speaking of the same process. The libertarian theorists associated with the Open Source movement can argue that there is continuity and linkage between FLOSS philosophy and traditional liberal thought on property and community, while neo- and post-Marxist interpreters will stress how it transcends the norms of property and commodification. Since peer to peer involves an application both of freedom and of equality, it has the potential to attract supporters of both the left and the right, to the extent that they are faithful to their respective ideals.
Lawrence Lessig's apparently tongue-in-cheek suggestion (in reply to Bill Gates equating copyright reform with communism) to call the P2P movement's advocates 'Common-ists' is not a bad concept at all. Commonism is in fact a growing movement: for the protection and expansion of the existing physical commons; for the creation and expansion of an enlarged Information Commons and public domain; and against the deepening of intellectual property restrictions, which disable the continued existence of the 'free culture development' that is a condition for the further development of P2P.

How does the new 'informational commons' differ from the traditional 'physical' commons? The physical commons concerns scarce 'rival' goods, and creates, under certain conditions, problems of abuse (the 'Tragedy of the Commons') and of fair 'entitlements', necessitating regulation; these problems are much less acute, if not non-existent, in the sphere of overabundant information resources, though of course free and easy access to networks is by no means assured for the totality of the world population. The traditional commons, which still exists in the South among native populations, is essentially 'local' and distinguishes itself by this community focus, in contrast both to centralized state property and resource management and to private property. These are 'limited access' commons in the sense that they are reserved for particular local communities (like pasture systems or irrigation systems); they are 'territorial' and driven by location-specific actors. However, given the severity of the ecological crises, the local Commons are also moving towards a global context. There are also important 'open access' commons, open to all and either national (highway systems) or global in scope (air, oceans). These are becoming more and more dominated by scarcity paradigms, however, due to the degradation of the resources of the biosphere.
This means that these physical commons, unregulated at first because of their initial abundance, are moving towards regulation. Commons advocate Peter Barnes has suggested that the best way to manage such physical commons might be through 'trusts', a legal form that carries with it the obligation to preserve the underlying capital, whether financial or physical, ensuring that the next generations have access to at least the same resource base. In addition, traditional physical Commons can be usefully divided along two significant axes: open vs. limited access, and non-regulated vs. regulated. These issues largely fail to arise with the non-rival goods of an Information Commons (unless the goal is political and censorship is desired).
[Table: types of Commons by type of good — rival 'resource' goods vs. non-rival 'information' goods, the latter associated with global affinity groups.]
What distinguishes these Commons from markets is the property regime, and they arise mostly in situations where network externalities make them the preferable option (the more they are used, the more value they obtain, and the more useful they become). In technical terms, "they have increasing returns to scale on the demand side". The initial investment may be very high and of no interest to any private investor, as with roads, but once they are built, their value increases with usage while the additional cost per user becomes marginal. The Information Commons, by contrast, is global in its essence from the start, and organized around affinity groups. These may sometimes have a local aspect (Wikicities) but are always open to worldwide participation. Information Commons projects are driven by cyber-collectives.
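The claim of "increasing returns to scale on the demand side" can be sketched numerically with three stylized rules of thumb for network value: linear growth with membership, quadratic growth with pairwise interactions, and exponential growth with possible subgroups. The function names below (Sarnoff, Metcalfe, Reed) are the conventional labels for these rules, not terms from this text, and the formulas are standard approximations rather than measurements.

```python
# Stylized network-value rules of thumb (assumed formulas, for illustration):
#   sarnoff:  value grows linearly with members (broadcast audience)
#   metcalfe: value grows with possible pairwise interactions, ~n^2
#   reed:     value grows with possible subgroups, ~2^n (group-forming networks)

def sarnoff(n: int) -> int:
    return n                      # one unit of value per member

def metcalfe(n: int) -> int:
    return n * (n - 1) // 2      # number of distinct pairs

def reed(n: int) -> int:
    return 2 ** n - n - 1        # non-trivial subgroups (size >= 2)

for n in (2, 10, 50):
    print(f"n={n:3d}  sarnoff={sarnoff(n)}  metcalfe={metcalfe(n)}  reed={reed(n)}")
```

Even at modest sizes the ordering is dramatic: at n=50, pairwise value is 1,225 while the group-forming term already exceeds 10^15, which is the intuition behind saying that demand-side value swamps the marginal cost per additional user.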
I would think it likely that in a future civilisational model, gift economy and Commons-based models will be complementary. P2P will function most easily where there is a sphere of abundance, i.e. for non-rival goods, while gift economy models may offer an alternative for managing scarcity, in the sphere of rival goods and resources. As my own preliminary ideal in this research project, I envision a future civilization with a core of P2P processes, surrounded by a layer of gift and fair trade applications, and with a market operating on the principles of 'natural capitalism', as outlined by authors such as Hazel Henderson, David Korten and Paul Hawken, i.e. a market which has integrated 'externalities' (environmental and social costs) to arrive at true costing. We may also want to look at the now forgotten tradition of 'markets without capitalism', a tradition that was stronger before World War II. Yochai Benkler has perhaps done the most serious work in delineating the respective optimal uses of 'sharing' vs. market economies.
All this, moreover, with the continued use, and perhaps even strengthening, of existing public institutions, which intervene whenever the three spheres above do not arrive at adequate solutions in terms of the public good.
3.4.D Who rules? Cognitive capitalists, the vectoral class, or netocrats?
We already mentioned the analysis of both the school of ‘cognitive capitalism’ and the theories of McKenzie Wark (Wark, 2004). They are part of a larger debate on the nature of the new regime of economic exchange.
According to the school of cognitive capitalism, capitalism needs to be historicized, because the main logic of economic exchange has changed over time. In a first phase, we have an agrarian- or merchant-based capitalism: land is turned into capital, and commerce, especially the triangular trade involving slavery, is the basis for producing a surplus. Non-machine assets, i.e. land and people, are the key to producing the surplus. At some point, industrial capitalism arises, based on capital assets in industry: the capitalists are the owners of the factories, machinery, and forges. But as these assets are abstracted into stocks, they start having a life of their own, both financial and informational, and industrial processes are transformed into processes based on flows of finance and information. So, according to the cognitive capitalism hypothesis, we have a third stage, cognitive capitalism, based on the predominance of immaterial flows, which in turn reconfigure the industrial and agricultural modes of production in their own image. But according to the main CC theorists, such as Yann Moulier-Boutang, the editor-in-chief of Multitudes magazine, and contributors such as M. Lazzarato (Lazzarato, 2004) and C. Vercellone (Vercellone, 2003), this is a change within capitalism. CC theorists argue both against neoclassical economists, who fail to historicize capitalism, and against postcapitalist information-age interpretations, which declare capitalism dead. In fact, if anything, there is a move to a postmodern form of hypercapitalism, of which neoliberal ideology is a symptom. The analysis of cognitive capitalism is part of a wider field of Marxist and post-Marxist interpretations of the knowledge economy.
If modernity (i.e. industrial capitalism) still had to compromise with a strong legacy of traditional elements, which muted its virulence (what possible use could the learning of Latin and the classics have for business!), in postmodernity the instrumental logic reigns supreme. The interest, and in my opinion the strength, of the CC hypothesis is that it can account both for radical change (the dominance of the immaterial) and for continuity (the capitalist mode), and can then start looking at the different changes taking place, such as new modes of regulation, social control, etc. In such a scenario, the working class is also transformed, becoming involved in knowledge production, affect-based services, and other 'immaterial' forms of work. The knowledge workers clearly become the key sector of the multitudes.
McKenzie Wark adds a twist, since he insists that a new class is now in power. Unlike the capitalists, who based their control on capital assets, a vectoral class has arisen that owes its power to the control of information (which it owns through patents and copyrights), of the stocks (archives) through which it is accessible, and of the vectors through which the information must flow (the media). Thus its members own not only the media that shape our mindsets, but also achieve dominance over industrial capitalists, because they own and trade the stocks based on information, and the latter need the information flows and vectors to run their process flows. It is no longer a matter of making profits through material industrial production, but of making margins in the trading of stocks, and of developing new monopolistic rents based on the ownership of information.
And the mirror image of the vectoral class is the hacker class, those who 'produce difference' (unlike the workers, who produced standard products and yearned to achieve unity), i.e. new value expressed through innovation. A crucial distinction between the more general concept of knowledge workers and the more specific class concept of the hacker class is that the latter produce new means of production, i.e. hardware, software, and wetware, and they are correspondingly stronger than farmers or workers could ever have been. What McKenzie Wark explains perhaps more cogently and starkly than the CC theorists is therefore the new nature of the class struggle, centered on the ownership of information and the ownership of the vectors. The key issue is thus the property form, responsible for creating the scarcity that sustains a marketplace. Another advantage is the clear distinction between the hacker class, which produces use value, and the vectoral class, i.e. the entrepreneurs, who transform it into exchange value. The predominance of financial capital is explained by the ownership of stocks, which replaces the ownership of capital, a less abstract form; and unlike the industrial capitalists, who were happy to leave a common and socialized culture, education, and science to the state, vectoral capitalists want to turn everything into a commodity. The latter point is a cogent explanation of the logic behind neoliberal 'hypercapitalism'.
Much less satisfactory is the netocratic thesis of Alexander Bard in his book Netocracy (Bard, 2002). He also insists on the postcapitalist nature of the new configuration, but the new class is described as 'in control' of networked information and as operating in a hierarchy of networks. Here we get no sense of a distinction between knowledge workers and information entrepreneurs. Similarly, in Pekka Himanen's very useful The Hacker Ethic (Himanen, 2001), though we get a very interesting insight into the new culture of work, no distinction is made between knowledge workers and entrepreneurs, between the hacker class and the vectoral class.
3.4.E The emergence of a netarchy
Above I have summarized the key theses about the new 'class configuration'. In this section I offer my own take on the matter, since I am convinced that both main interpretations miss something important: the peer to peer era is creating a new type of capitalists, based not on the accumulation of knowledge assets or vectors of information, but on the 'exploitation' of the networks of participatory culture.
Recall the following: the thesis of cognitive capitalism says that we have entered a new phase of capitalism based on the accumulation of knowledge assets rather than physical production tools. The vectoralist thesis says that a new class has arisen which controls the vectors of information, i.e. the means through which information and creative products have to pass in order to realize their exchange value. Both describe the processes of the last 40 years, say the post-1968 period, which saw furious knowledge-based competition and a race for the acquisition of knowledge assets, leading to an extraordinary weakening of the scientific and technical commons. And they do this rather well.
But in my opinion, both theses fail to account for the newest of the new, i.e. the emergence of peer to peer as a social format. What is happening?
In terms of knowledge creation, a vast new information commons is being created, which is increasingly out of the control of cognitive capitalism. And the new information infrastructure cannot be said to 'belong', in any real sense, to the vectoralist class.
Therefore, my hypothesis is that a new capitalist class is emerging, which I propose to call the netarchists (since netocracy 'is already taken' by Alexander Bard, and I reject his interpretation, see above). These are the forces which both 'enable' and exploit the participatory networks arising in the peer to peer era. Examples abound:
- Red Hat: it makes a living through associated services around open source and free software which, and this is crucial, it doesn't own and doesn't need to own. We now have not only the spectacle of firms divesting their physical capital (the famous examples of Alcatel divesting itself of any and all manufacturing, or Nike not producing any shoes itself), but also of their intellectual capital, witness IBM's recent gift of many patents to the open source 'patents commons'.
- Amazon: yes, it does sell books, but its force comes from being the intermediary between the publishers and the consumers of books. Crucially, its success comes from enabling knowledge exchange between these customers. Without that exchange, Amazon wouldn't quite be Amazon: it is the key to its success and valuation, and without it Amazon would just be another bookseller.
- Google: yes, it does own the search algorithms and the vast machinery of distributed computers. But, just as crucially, its value lies in the vast content created by users on the internet. Without it, Google would be nothing substantial, just another firm selling search engines to corporations. And the ranking algorithm is crucially dependent on the links towards documents, i.e. the 'collective wisdom' of internet users.
- EBay: it sells nothing; it just enables, and exploits, the myriad interactions between users creating markets.
- Skype: it mobilizes the processing resources of its participating clients' computers.
- Yahoo: it derives its value from being a portal and intermediary.
So we can clearly see that for these firms, accumulating knowledge assets is not crucial, and owning patents is not crucial. You could argue that they are 'vectors' in the sense of Wark, but they do not have a monopoly on them, as in the mass media age. Rather, they are 'acceptable' intermediaries for the actors of the participatory culture. They exploit the economy of attention of the networks, even as they enable it. They are crucially dependent on the trust of the user communities. Yes, as private for-profit companies they try to rig the game, but they can only get away with so much, because if they lose that trust, users will leave in droves, as we have seen in the extraordinary volatility of the search engine market before Google's dominance. Such companies reflect a deeper change in the general practices of business, which is increasingly being re-organized around participatory customer cultures; see section 3.1.B on the cooperative nature of cognitive capitalism, where this shift is already discussed. In chapter five, where we examine the 'physical laws' operating in networks, we see how the linear value growth of individual membership creates an economy of attention where portals and new intermediaries emerge; how the square value growth of interactions creates the transactional web and the associated platforms; and how the exponential growth of the Group-Forming-Networks quality of networks creates infinite autonomous content for ever-shifting 'infinite' affinity groups, thereby transcending the 'economy of attention' characteristics. (EBay profits from all three properties:
- as an intermediary to content, i.e. what is available where,
- from the transactions amongst its members, and
- from their ability to form auction groups themselves.)
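The three value-growth regimes invoked here (linear membership value, square interaction value, and exponential group-forming value) correspond to what are commonly called Sarnoff's, Metcalfe's, and Reed's laws. The following sketch is illustrative only; the exact functional forms are stylized conventions, not formulas from this text:

```python
# Stylized versions of the three network value laws discussed above:
# - Sarnoff: value grows linearly with members (broadcast/content reach)
# - Metcalfe: value grows with the number of possible pairwise interactions
# - Reed: value grows with the number of possible sub-groups (GFNs)

def sarnoff(n: int) -> int:
    """Linear value: each member adds one unit of audience value."""
    return n

def metcalfe(n: int) -> int:
    """Pairwise value: number of possible connections among n members."""
    return n * (n - 1) // 2

def reed(n: int) -> int:
    """Group-forming value: number of non-trivial sub-groups of n members."""
    return 2 ** n - n - 1

# Even at modest network sizes, the three regimes diverge dramatically:
for n in (10, 20, 50):
    print(f"n={n}: sarnoff={sarnoff(n)}, metcalfe={metcalfe(n)}, reed={reed(n)}")
```

The divergence is the point of the argument in chapter five: intermediaries capturing only linear (attention) value are structurally weaker than platforms capturing interaction or group-forming value.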
My conclusion is that the emergence of P2P begets a new capitalist sub-class, which accommodates itself to the networks, places itself at crucial nodes and offers itself as a voluntary hub, rather than living off knowledge assets. In this sense, vectoralists, even as they ascend to the heights of power through restrictive copyright legislation, have already reached the zenith of their power, and they will eventually be replaced by new formats of capitalist exploitation, which accommodate themselves in much more intelligent ways to the peer to peer realities.
At the same time, we might expect peer to peer exchanges that fall outside of any for-profit priorities, and businesses from the social economy sector, for whom profit is a subsidiary concern. This new sector may seem marginal today, but it is, in my opinion, 'the next wave' in terms of new types of corporations.
There is another aspect in which the concept of netarchy is useful. Throughout this essay we have stressed the double nature of P2P: a form in which it is the infrastructure (technical, collaborative, etc.) of the current system; and a form in which it transcends the current system, pointing towards an alternative economic organisation. In one way, distributed networks and P2P-like processes can be used to reinforce Empire; in another way, to combat it. Ideologically, there will be those who favour P2P but see capitalism as the endgame of history, who cannot imagine an alternative; while others, including myself, see it as the premise of radical social change. It is easy to see how the first position can be termed netarchical, since it inevitably accepts and glorifies the for-profit appropriation of the participatory networks, while the latter will favour autonomous cooperation.
This is not to say that netarchy does not play a useful role. New classes at first usually play a progressive role, riding on the back of new productive possibilities. And such is the role of netarchy. Compared to the cognitive capitalists and vectoralists, who respectively monopolise knowledge assets and information vectors, netarchists need neither one nor the other. Thus they do not side with the forces trying to rig computers with digital rights management restrictions, nor with the forces putting young people who share music in jail. Rather they will try to both enable and use the new practices, on the one hand 'making them safe for capitalism', but also funding, technologically developing and enabling new P2P processes. Acting as intermediaries between both worlds, they look for 'reformist' solutions as it were.
The netarchical ideology has its expression especially in the international political economy, in the form of 'bottom-of-the-pyramid' economic development, as championed by C.K. Prahalad. Prahalad and the movement he inspired recognize that the one billion people at the bottom of the pyramid manage to have a cash flow of $2 per day, even though they do not have the capital. And Hernando de Soto shows how this capital can be partly generated by 'formalising' the informal capital that they often do have, but that the current institutional framework cannot recognize. Thus Prahalad and others try to convince capital and development institutions to develop solutions like micro-banking, creating bottom-up collectives of the poorest and a virtuous cycle. A bottom-up, distributed form of capitalism if you like, which shows an uncanny resemblance to P2P processes, and this is why we consider this position to be netarchical. The problem with these solutions is that they often aim to 'capitalise' everything, and have no regard for the surviving forms of the commons which are still very much alive in certain areas of the South, destroying the traditional social fabric. There is also the profit requirement: one cannot see how the current 15% profit requirement of financial investors and multinational corporations can lead to any permanent engagement of these forces in B.O.P. projects.
Jock Gill of the Greater Democracy weblog has criticized BOP schemes for these reasons, and has offered an alternative approach: citizen-to-citizen or 'edge to edge' development partnerships, whereby collectives of individuals with capital would directly provide collectives of individuals without capital with the necessary small amounts of capital, without imposing the profit requirement. Such practices are already widespread within the U.S. itself, in the form of Gifting Circles, whereby local groups collate the gifting money of their members, study options for giving together, and decide on appropriate local initiatives to support.
- Coase's Penguin, or Linux and the Nature of the Firm. Yochai Benkler.
URL = http://www.benkler.org/CoasesPenguin.html
- Principles of the free software movement, described at Fsf.org:
"Free software" is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech," not as in "free beer."
Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it refers to four kinds of freedom, for the users of the software:
- The freedom to run the program, for any purpose (freedom 0).
- The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help your neighbour (freedom 2).
- The freedom to improve the program, and release your improvements to the public, so that the whole community benefits. (freedom 3). Access to the source code is a precondition for this.” (Stallman website)
- The GPL license explained:
"The GPL governs the programming instructions called source code that developers write and then convert into the binary files that computers understand. At its heart, the GPL permits anyone to see, modify and redistribute that source code, as long as they make changes available publicly and license them under the GPL. That contrasts with some licenses used in open-source projects that permit source code to be made proprietary. Another requirement is that GPL software may be tightly integrated only with other software that also is governed by the GPL. That provision helps to create a growing pool of GPL software, but it's also spurred some to label the license "viral," raising the specter that the inadvertent or surreptitious inclusion of GPL code in a proprietary product would require the release of all source code under the GPL."
An article about the 'copyleft attitude' and the emergence of the free art license, at http://infos.samizdat.net/article301.html
Richard Stallman on the free software principles:
"My work on free software is motivated by an idealistic goal: spreading freedom and cooperation. I want to encourage free software to spread, replacing proprietary software that forbids cooperation, and thus make our society better. That's the basic reason why the GNU General Public License is written the way it is—as a copyleft. All code added to a GPL-covered program must be free software, even if it is put in a separate file. I make my code available for use in free software, and not for use in proprietary software, in order to encourage other people who write software to make it free as well. I figure that since proprietary software developers use copyright to stop us from sharing, we cooperators can use copyright to give other cooperators an advantage of their own: they can use our code."
French-language interview with Stallman: http://multitudes.samizdat.net/article.php3?id_article=214
- Richard Stallman on why it is okay to charge for free software:
"The word "free" has two legitimate general meanings; it can refer either to freedom or to price. When we speak of "free software", we're talking about freedom, not price. (Think of "free speech", not "free beer".) Specifically, it means that a user is free to run the program, change the program, and redistribute the program with or without changes. Free programs are sometimes distributed gratis, and sometimes for a substantial price. Often the same program is available in both ways from different places. The program is free regardless of the price, because users have freedom in using it."
- The Consensus of the Open Source Initiative
Open Source projects are fundamentally similar to Free Software in that they both forbid any restriction on the free distribution of the software and on the availability of the source code. The following principles are accepted to define an Open Source project:
- no restriction on the free distribution is allowed (but payment is allowed)
- the source must be freely available to all at no cost
- changes must be accepted and distributed
- the author can request a protected version number
- no discrimination in usage is allowed, for every activity, including commercial usage
- the rights attached to any program are for all the users all of the time
- the license cannot be program specific (to avoid commercial restrictions)
- the license cannot be applied to other code (such as proprietary additions)
- the license must be technologically neutral (not restricted to certain devices or operating systems)
- Steve Weber, professor of political science at U.C. Berkeley, maintains:
“that the open source community has built a mini-economy around the counterintuitive notion that the core property right in software code is the right to distribute, not to exclude. And it works! This is profound, and has much broader implications for the property rights regimes that underpin other industries, from music and film to pharmaceuticals. Open source is transforming how we think about "intellectual" products, creativity, cooperation, and ownership—issues that will, in turn, shape the kind of society, economy, and community we build in the digital era.” (publisher statement)
- Overview of the commercial uptake of Open Source software, June 2005 update
"And so Linux entered commercial use. Its first, and still most successful, niche was Web servers; for at least five years, the majority of the world's Web servers have used open-source software. Then, several years ago, IBM started to contribute money and programmers to open-source efforts. IBM, Intel, and Dell invested in Red Hat Software, the leading commercial Linux vendor, and Oracle modified its database products to work with Linux. In late 2003, Novell announced its purchase of SuSE, a small German Linux vendor, for more than $200 million. IBM invested $50 million in Novell. IBM, Hewlett-Packard, and Dell began to sell hardware with Linux preinstalled. IBM also supports the Mozilla Foundation, developer of the open-source Firefox browser, and with Intel, HP, and other companies recently created the Open Source Development Labs (OSDL), a consortium promoting the business use of Linux, which has hired Torvalds and other open-source developers. Now, Linux is running on everything from $80 routers to cell phones to IBM mainframes, and is much more common on desktop PCs. Red Hat is a highly profitable $200 million company growing 50 percent per year, and commercial open-source vendors serve many important software markets. For instance, in databases, there is MySQL, which now has annual revenues of about $20 million, doubling every year. In application servers, there is JBoss, and in Web servers, Covalent. In the server market, the eventual dominance of Linux seems a foregone conclusion. Michael Tiemann, Red Hat's vice president for open-source affairs, told me, "Unix is already defeated, and there's really nothing Microsoft can do either. It's ours to lose." Of course, Microsoft, which refused all interview requests for this article, sees things differently. But surveys from IDC indicate that in the server market, Linux revenues are growing at more than 40 percent per year, versus less than 20 percent per year for Windows. Unix, meanwhile, is declining."
(Charles Ferguson, Technology Review, June 2005, at http://technologyreview.com/articles/05/06/issue/feature_linux.asp?p=2 )
- The Professionalization of Linux
The following article by Business Week is the result of an in-depth investigation regarding the actual production of Linux:
“Little understood by the outside world, the community of Linux programmers has evolved in recent years into something much more mature, organized, and efficient. Put bluntly, Linux has turned pro. Torvalds now has a team of lieutenants, nearly all of them employed by tech companies, that oversees development of top-priority projects. Tech giants such as IBM (IBM ), Hewlett-Packard (HPQ ), and Intel (INTC ) are clustered around the Finn, contributing technology, marketing muscle, and thousands of professional programmers. IBM alone has 600 programmers dedicated to Linux, up from two in 1999. There's even a board of directors that helps set the priorities for Linux development. Not that this Linux Inc. operates like a traditional corporation. Hardly. There's no headquarters, no CEO, and no annual report. And it's not a single company. Rather, it's a cooperative venture in which employees at about two dozen companies, along with thousands of individuals, work together to improve Linux software. The tech companies contribute sweat equity to the project, largely by paying programmers' salaries, and then make money by selling products and services around the Linux operating system. They don't charge for Linux itself, since under the cooperative's rules the software is available to all comers for free."
Richard Stallman in a recent interview on where Free Software and the GPL are heading, at http://www.ofb.biz/modules.php?name=News&file=article&sid=353
- Personal characteristics of FLOSS developers, an Asian survey and study:
- Development time is short (less than 5 hours per week);
- Main targets of development are networks and Web services.
- The number of projects are few, but about half of the developers have leadership experience;
- More than 40 percent act globally in Japan and Asia;
- Many developers are not engaged in programming work;
- Most developers learn their skill by themselves and do not have an interest in formal qualifications;
- Main purpose is to obtain and share skills and knowledge;
- About 60 percent of the developers regard their signature as important;
- Main sources of assistance are government agencies and public foundations in Japan, educational institutions in Asia, and various organizations and individuals in US
- Research into Open Source as a collaborative social process
FLOSS-POLS, an EU-funded research project, claims it is "the single largest knowledge base on open source usage and development worldwide" and its 'third track' examines "the efficiency of open source as a system for collaborative problem-solving"; see http://www.flosspols.org/ . The peer-reviewed journal First Monday dedicated a special issue to 'open source as a social process', at http://www.firstmonday.org/issues/issue9_11/index.html
See in particular: Item 1, http://www.firstmonday.org/issues/issue9_11/lehmann/index.html : "This paper takes a closer look at FLOSS developers and their projects to find out how they work, what holds them together and how they interact."; Item 2, on accountability in Open Source projects, at http://www.firstmonday.org/issues/issue9_11/david/index.html
What these various studies suggest is that FLOSS projects have an onion-like structure:
"The focus of these studies has largely been on the contribution of code and they therefore have largely discussed development centralization. At the center of the onion are the core developers, who contribute most of the code and oversee the design and evolution of the project. In the next ring out are the co–developers who submit patches (e.g., bug fixes) which are reviewed and checked in by core developers. Further out are the active users who do not contribute code but provide use–cases and bug–reports as well as testing new releases. Further out still, and with a virtually unknowable boundary, are the passive users of the software who do not speak on the project’s lists or forums." (http://www.firstmonday.org/issues/issue10_2/crowston/index.html )
- Production without a manufacturer, an example from the field of music:
"Record companies, schmecord companies – who needs ‘em? That’s not where the money is. The business is with the real customers – the fans. That’s who we’re trying to connect with," band member Frank Black, AKA Black Francis, told the Associated Press this week. "I never really was much of a believer in the album anyway," Black said. "Singles are what people relate to." Apparently, the band doesn't feel it needs a record label any more and, while their plans are still unformed at the moment, the idea generally is to combine selling live CDs made and then sold at concerts, producing music for movies and commercials, and distributing singles via the internet."
(email communication from Christophe Lestavel, original source DM Europe at http://www.dmeurope.com/ )
- Production without a manufacturer, or the supply-side supplying itself:
"Few people in the mainstream world even recognize that a radically new kind of economics is emerging – the “demand-side” supplying itself! Searls said that open source is the victory of ST – “social technology” – over IT – information technology. This stems directly from the commons principles that lie at the heart of the Internet: “No one owns it. Everybody can use it. Anyone can improve it.” One comment by Searls really reverberated with me. He said that the word “authority” means that we grant certain people the right to “author” who we are. Now that hierarchical authority is being supplanted by decentralized, networked authority, in effect, “We are all the authors of each other."
(copy from unknown blog, received by personal communication)
- See also an analysis of the relation between free software and capitalism, at
- Structural use of interactive consumers to externalize costs, by Johan Soderbergh:
"The shifting of time-consuming tasks from paid employees to unpaid customers when accessing banking services is one example of enhanced interactivity. Another example would be the 15,000 volunteer maintainers of AOL’s chat-rooms. Or the attempt by the Open Source initiative to co-opt the labour power of free software engineers. These are high points in a broader pattern, according to Tiziana Terranova. Free labour has become structural to the late capitalist cultural economy. It is therefore totally inadequate to apply the leftist favourite narrative of authentic subcultures that are hijacked by commercialism. Authentic subcultures at this point of time are a delusion, she charges. ‘Independent’ cultural production takes place within a broader capitalist framework which has already anticipated and therefore modified the ‘active consumer’. Interactivity amounts to nothing else than intensified exploitation of the audience power of the user/consumer. It is no different to the intensification of exploitation of wage labourers."
- How the use of FLOSS methods leads to lower transaction costs in business, at http://www.firstmonday.org/issues/issue9_11/soares/index.html
- The history of Linux
"This paper will establish the development of Linux, complexity theory and its relationship to Linux, the Linux business model, rules governing Linux and the possible lessons that future managers can learn. Comprehensive ranges of secondary sources have been used to compile a detailed but accurate picture of this fascinating story of Linux.”
- FS/OS development in Asia:
Linux making great strides in China, at http://www.businessweek.com/technology/content/nov2004/tc20041115_4873_tc057.htm?
Characteristics of Asian open source development, http://www.firstmonday.org/issues/issue9_11/shimizu/index.html
Home page for Asian OSS, at http://www.asia-oss.org
- Firefox, the alternative browser
“Tuesday, the answer to IE arrived: a safe, free, fast, simple and compatible browser called Mozilla Firefox. Firefox (available for Win 98 or newer, Mac OS X and Linux at www.mozilla.org) is an unlikely rival, developed by a small nonprofit group with extensive volunteer help. Its code dates to Netscape and its open-source successor, Mozilla, but in the two years since Firefox debuted as a minimal, browser-only offshoot of those sprawling suites, it has grown into a remarkable product. Firefox displays an elegant simplicity within and without."
The Linux desktop:
"as DESKTOP OPERATING SYSTEM, replace MS Windows with Linspire Lindows, Gnome, or BeOS Max; as INSTANT MESSAGING SERVICE, replace AOL AIM with Jabber; as OFFICE SUITE, replace MS Office with OpenOffice or Gnome Office; as ACCOUNTING PROGRAM, replace Intuit with Compiere; for PROJECT MANAGEMENT, replace IBM Lotus Notes with Horde Project or Net Office Project; as DATABASE PROGRAM, replace MS Access with Twiki, Druid, or Gnome DB; for FAX MGT., replace Esher VSI Fax with HylaFax or Mgetty+Sendfax; for BROWSING, replace Internet Explorer with Firefox."
(personal communication, inspired by a Wired article)
Mono is an open source alternative to the Microsoft .Net specifications, at http://www.mono-project.com/about/index.html
Five fundamental reasons why Open Source projects do not make great inroads amongst ordinary users, at http://www.firstmonday.org/issues/issue9_4/levesque/index.html#l5
- The Windows ecosystem is in danger
"The software ecosystem consists of every program written for a particular piece of software or hardware. These include operating system ports and reference designs in the case of hardware, and most often applications in the case of software. It is very very hard for any company to carry a platform on its own. The more other companies contribute to that platform, by writing software that works on the platform, the more that weight is lifted off the creator’s shoulders and shared by others. Linux has a very robust software ecosystem. My point last week was that the Windows software ecosystem is weakening. The evolution of technology indicates that a weakening ecosystem presages a dying ecosystem, and then a dying product line. IBM saw this first-hand in its mainframe and minicomputer product lines. Now IBM is attached to a large, vibrant growing ecosystem while, as I noted, that of Microsoft Windows is weakening — becoming ever-more dependent on Microsoft itself for growth."
- An in-depth series of reports on the usage of FLOSS methodologies and their institutionalization, at http://www.infonomics.nl/FLOSS/report/, June 2002
- Co-founder Jimmy Wales on the ambitious aims of Wikipedia
"One of the most important things to know about Wikipedia is that it is freely licensed and that the free license enables other people to freely copy, redistribute, and modify our work both commercially and non-commercially. We are licensed under the GNU Free Documentation License and we've been around since January 2001, so that's about four years ago. The Wikimedia Foundation is our non-profit organization that I founded about a year and a half ago and transferred all the assets into the foundation, so the foundation actually manages the website and runs everything. The mission statement of the foundation is to distribute a free encyclopedia to every single person on the planet in their own language. And we really mean that: every single person on the planet, which includes a lot more than just a cool website."
(Jimmy Wales lecture at Stanford University, 2-9-2005, quoted by Howard Rheingold on the SmartMob blog)
Wikipedia.org: The pros and cons of Wikipedia (vs. traditional encyclopedia production) are discussed in this article: http://soufron.free.fr/soufron-spip/article.php3?id_article=57
This paper explores the character of “mutual aid” and interdependent decision making within the Wikipedia at http://reagle.org/joseph/2004/agree/wikip-agree.html
A profile of the most prolific contributors and the values driving them, at http://www.wired.com/news/culture/0,1284,66814,00.html?
- See the page on this wiki on Cognitive Capitalism
- Example of innovation as a diffuse process, from a report by Business Week:
"To get an idea of how diffuse the innovation process has become, try dissecting your new PDA, digital cameraphone, notebook PC, or cable set-top box. You will probably find a virtual U.N. of intellectual-property suppliers. The central processor may have come from Texas Instruments (TXN ) or Intel, and the operating system from BlackBerry (RIMM ), Symbian, or Microsoft. The circuit board may have been designed by Chinese engineers. The dozens of specialty chips and blocks of embedded software responsible for the dazzling video or crystal-clear audio may have come from chip designers in Taiwan, Austria, Ireland, or India. The color display likely came from South Korea, the high-grade lens from Japan or Germany. The cellular links may be of Nordic or French origin. If the device has Bluetooth technology, which lets digital appliances talk to each other, it may have been licensed from IXI Mobile Inc., one of dozens of Israeli wireless-telecom companies spun off from the defense industry."
- The socialization of innovation 'outside' of the enterprise
"Only a fraction of the aesthetic innovations made in society occurs within the wage labour relation. That is, in the space conceptualised by Tessa Morris-Suzuki as ‘before’ production, in laboratories and in ad agencies. Most aesthetic innovation takes place ‘after’ production. It happens 'after' the wage labour relation, in consumption, in communities, on the street, and on the school yard. It is here the social factory casts its long shadow. The social factory is a place with no walls, no gates, no boss – and yet rife with antagonism."
(Jan Soderbergh in http://info.interactivist.net/article.pl?sid=04/09/29/1411223)
The contribution by Tessa Morris-Suzuki mentioned above was written in: Jim Davis, Thomas A. Hirschl & Michael Stack, eds. Cutting edge: technology, information capitalism and social revolution, 1997
- Von Hippel on 'lead users'
"Eric von Hippel's new book, Democratizing Innovation, documents how breakthrough innovations are developed by "lead users" – users with a high incentive to solve problems, who often develop solutions that the market will want in the future. Von Hippel argues that a user-centered innovation process – one that harnesses lead users – offers great advantages over the manufacturer-centric innovation model that has been the mainstay of commerce for hundreds of years. To this end, he has developed a systematic model for companies to tap into the innovation potential of their lead user communities." (quote from the Smart Mobs weblog)
An interview with the author where he explains the concept of "lead users", at http://www.thefeature.com/article?articleid=101525&ref=6647666
More essays by the author at http://web.mit.edu/evhippel/www/papers.htm
- User-centered innovation practices vs. manufacturer-centric innovation
"When I say that innovation is being democratized, I mean that users of products and services—both firms and individual consumers—are increasingly able to innovate for themselves. User-centered innovation processes offer great advantages over the manufacturer-centric innovation development systems that have been the mainstay of commerce for hundreds of years. Users that innovate can develop exactly what they want, rather than relying on manufacturers to act as their (often very imperfect) agents. Moreover, individual users do not have to develop everything they need on their own: they can benefit from innovations developed and freely shared by others. The trend toward democratization of innovation applies to information products such as software and also to physical products.
The user-centered innovation process just illustrated is in sharp contrast to the traditional model, in which products and services are developed by manufacturers in a closed way, the manufacturers using patents, copyrights, and other protections to prevent imitators from free riding on their innovation investments. In this traditional model, a user’s only role is to have needs, which manufacturers then identify and fill by designing and producing new products. The manufacturer-centric model does fit some fields and conditions. However, a growing body of empirical work shows that users are the first to develop many and perhaps most new industrial and consumer products. Further, the contribution of users is growing steadily larger as a result of continuing advances in computer and communications capabilities. In this book I explain in detail how the emerging process of user-centric, democratized innovation works. I also explain how innovation by users provides a very necessary complement to and feedstock for manufacturer innovation. The ongoing shift of innovation to users has some very attractive qualities. It is becoming progressively easier for many users to get precisely what they want by designing it for themselves. And innovation by users appears to increase social welfare. At the same time, the ongoing shift of product-development activities from manufacturers to users is painful and difficult for many manufacturers. Open, distributed innovation is “attacking” a major structure of the social division of labour. Many firms and industries must make fundamental changes to long-held business models in order to adapt. Further, governmental policy and legislation sometimes preferentially supports innovation by manufacturers. Considerations of social welfare suggest that this must change. The workings of the intellectual property system are of special concern. 
But despite the difficulties, a democratized and user-centric system of innovation appears well worth striving for."
- Examples of user innovation communities at work
The music identification technology of Gracenote was almost entirely produced by music fans, see http://www.wired.com/news/digiwood/0,1412,64033,00.html? . But because it has turned private, MusicBrainz has been created as a true open source alternative. iPodLounge contains more than 220 creative designs for future iPods, at http://www.wired.com/news/mac/0,2125,63903,00.html?
- 'Customer-made' production and marketing, special issue of Trendwatching newsletter, May 2005, at
http://www.trendwatching.com/newsletter/newsletter.html . Its June 2005 issue covers twinsumers, how
collaborative software is bringing consumers of similar taste together.
- Personal Fabrication technology
"What if you could design and produce your own products, in your own home, with a machine that can be used to make almost anything? Imagine if you didn't have to wait for a company to sell the product you wanted but could use your own personal fabricator to create it instead. Neil Gershenfeld, Director of MIT's Center for Bits and Atoms, believes that personal fabricators will allow us to do just that and revolutionize our world.
His most recent book, FAB: The Coming Revolution on Your Desktop—From Personal Computers to Personal Fabrication, explores the ability to design and produce your own products, in your own home, with a machine that combines consumer electronics with industrial tools. Such machines, personal fabricators, offer the promise of making almost anything (including new personal fabricators) and could, as a result, revolutionize the world just as personal computers did a generation ago."
See also iFabricate.com and the Fab Labs at MIT, at http://cba.mit.edu/projects/fablab
- CollabNet helps corporations implement open source methodologies, at http://www.collab.net/
- A successful corporate adoption of the participatory model, the SEMCO case
In the book The Seven-Day Weekend: Changing the Way Work Works, CEO Ricardo Semler explains the counter-intuitive measures he took to make his company successful by relying on the self-organisation skills of his workers. A paradoxical top-down implementation of the hacker culture:
- Give up control (e.g., no organization charts, dress code, fixed offices or policies; complete flex-time for all workers, including those on assembly lines).
- Share information (e.g., make all salaries public and invite everyone to attend board meetings; Semler even shares profit calculations with customers).
- Encourage self-management (i.e., force people to think independently, question everything, and solve their own problems; manage by doing nothing yourself when problems arise).
- Discourage uniformity (e.g., rotate jobs, allow extreme flexibility in work and pay).
- User-driven advertising
Increasingly, users are themselves distributing information about products and services that they appreciate; see the Wired article on a famous user-made iPod ad, at http://www.wired.com/news/mac/0,2125,66001,00.html? . Companies are also learning to use (and abuse) these communities of 'passionate consumers', according to this report in Le Monde, at http://www.lemonde.fr/web/article/0,[email protected],36-396272,0.html
- Coordination Theory
“Thomas Malone: What I mean by coordination theory is that body of theory and principles that help explain the phenomena of coordination in whatever systems they arise. Now what do I mean by coordination? We define coordination as the management of dependencies among activities. Now how do we proceed on the path of developing coordination theory? The work we've done so far says that if coordination is the managing of dependencies among activities, a very useful next step is to say: what kinds of dependencies among activities are possible? We've identified three types of dependencies that we call atomic or elementary dependency types. Our hypothesis is that all the dependencies, all the relationships in the world, can be analyzed as either combinations of or more specialized types of these three elementary types. The three are: flow, sharing, and fit. Flow occurs whenever one activity produces some resource used by another activity. Sharing occurs when a single resource is used by multiple activities. And fit occurs when multiple activities collectively produce a single resource. So those are the three topological possibilities for how two activities and one resource can be arranged. And each of them has a clear analog in the world of business or any of the other kinds of systems we talked about.
Flow is probably the most obvious. It happens all over the place, and in some ways is the most elementary of all. Sharing also happens a lot whenever you've got one resource shared by multiple people or activities, whether that resource is a machine on a factory floor, a budget of money, or a room, or whatever needs to be used potentially by multiple activities. The least obvious is the last one called fit. A good example of where that occurs would be if you have engineers designing a car. One engineer is designing the engine, another engineer designing the body, and so forth. There's a dependency between the activities of those engineers that arises from the fact that all of the pieces have to fit together in the same car. So the idea is that, for each of these types of dependencies, there's a family of possible coordination processes that can be used to manage it. For instance, with a sharing dependency, one way of managing that is by first come, first served. Another way of managing that is by priorities: the [people with the] highest-priority activity get to use the resource as long as they need it, as long as there's no other higher-priority activity there. And for each of the other types of dependencies you can have a similar kind of family of coordination processes for managing them, some of which are centralized, some of which are decentralized."
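Malone's taxonomy lends itself to a small sketch in code. The following is a minimal, hypothetical model (the class names, `Request` fields, and scheduling functions are illustrative assumptions, not Malone's formal notation) of the three elementary dependency types, together with the two coordination processes he mentions for managing a sharing dependency: first come, first served, and priority.

```python
# Illustrative sketch of Malone's three elementary dependency types and of
# two coordination processes for a "sharing" dependency. All names here are
# assumptions for illustration, not part of coordination theory itself.
from dataclasses import dataclass
from enum import Enum

class Dependency(Enum):
    FLOW = "flow"        # one activity produces a resource another activity uses
    SHARING = "sharing"  # a single resource is used by multiple activities
    FIT = "fit"          # multiple activities collectively produce one resource

# Malone's own examples, keyed by dependency type
examples = {
    Dependency.FLOW: "a production step feeds its output to the next step",
    Dependency.SHARING: "two teams need the same machine on the factory floor",
    Dependency.FIT: "engine and body designs must fit together in one car",
}

@dataclass
class Request:
    priority: int   # lower number = higher priority
    arrival: int    # arrival order, used for first-come-first-served
    activity: str

def schedule_fcfs(requests):
    """Manage a sharing dependency by first come, first served."""
    return [r.activity for r in sorted(requests, key=lambda r: r.arrival)]

def schedule_priority(requests):
    """Manage a sharing dependency by priority, ties broken by arrival."""
    return [r.activity for r in sorted(requests, key=lambda r: (r.priority, r.arrival))]

requests = [
    Request(priority=2, arrival=0, activity="paint body"),
    Request(priority=1, arrival=1, activity="machine engine"),
    Request(priority=2, arrival=2, activity="test brakes"),
]

print(schedule_fcfs(requests))      # ['paint body', 'machine engine', 'test brakes']
print(schedule_priority(requests))  # ['machine engine', 'paint body', 'test brakes']
```

The point of the sketch is Malone's claim that, for each dependency type, a whole family of coordination processes is possible: here the same pool of requests for one shared resource is ordered two different ways, one decentralized (arrival order) and one requiring a central notion of priority.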
Book by the author: Thomas Malone, Coordination Theory and Collaboration Technology
- Open Process Handbook Initiative (OPHI)
OPHI is a group of organizations and individuals dedicated to developing an on-line collection of knowledge about business processes that is freely available to the general public under an innovative form of "open source" licensing. More info at: http://ccs.mit.edu/ophi/index.htm
for an overview of designing corporations around customer cultures, at
explanation of the concept of the general intellect, at
- Cognitive capitalism
"The thesis defended here is that of a new 'great transformation' (to borrow Karl Polanyi's expression) of the economy, and therefore of political economy (…) Admittedly, this is not a rupture in the mode of production, since we are still within capitalism, but its components are as thoroughly renewed as those of industrial capitalism were with respect to merchant capitalism (in particular in the status of dependent labour, which passed from the second serfdom and from slavery to free wage labour). To designate the metamorphosis under way, we will use the notion of cognitive capitalism as a third species of capitalism."
Yann Moulier Boutang in http://multitudes.samizdat.net/article.php3?id_article=1656 ; See also http://www.ish-lyon.cnrs.fr/labo/walras/Objets/New/20021214/YMB.pdf
Self-organisation and cooperation in cognitive capitalism, special issue of Solaris magazine, at http://biblio-fr.info.unicaen.fr/bnum/jelec/Solaris/d05/5introduction.html , http://biblio-fr.info.unicaen.fr/bnum/jelec/Solaris/d05/5link-pezet.html
Critique from the French Trotskyist Michel Husson, in: Sommes-nous entrés dans le capitalisme cognitif? (Have we entered cognitive capitalism?), Critique communiste n°169-170, summer-autumn 2003
- The Regulation School: some documentation
Some recent articles and essays from a newsletter associated with the Regulation school will give you an idea of the high quality and level of interest of their production:
On the concept of ‘worldwide public goods’, at http://www.upmfgrenoble.fr/irepd/regulation/Lettre_regulation/lettrepdf/LR48.pdf ; the current phase of American hegemony is unsustainable, at http://www.upmf-grenoble.fr/irepd/regulation/Lettre_regulation/lettrepdf/LR46.pdf ; on the need to reconsider our outdated notions of productivity, which have no bearing on the current situation, at http://www.upmf-grenoble.fr/irepd/regulation/Lettre_regulation/lettrepdf/LR43.pdf ; an overview of intellectual property regimes and their evolution, at http://www.upmfgrenoble.fr/irepd/regulation/Lettre_regulation/index.html
- On soul-destroying corporate cultures:
“Whether it is in response to us sensing that a new possibility exists for us on the horizons of our current ways of being, or whether it is to do with us sensing an increasing lack, is difficult to say. But, whichever it is, there is no doubt that there is an increasing recognition that the administrative and organization systems, within which we have long tried to relate ourselves to each other and our surroundings, are crippling us. Something is amiss. They have no place in them for us, for our humanness. While the information revolution bursts out around us, there is an emerging sense that those moments in which we are most truly alive and able to express our own unique creative reactions to the others and othernesses around us (and they to us), are being eliminated. In an over-populated world, there seem to be fewer and fewer people to talk to – and less and less time in which to do it.”
- "Management-by-objectives" as a feudal structure:
By Robert Jackall, “Moral Mazes”, 1988, in fact an in-depth anthropological study of the modern enterprise format:
"When managers describe their work to an outsider, they almost always first say: 'I work for [Bill James]' or 'I report to [Harry Mills].' and only then proceed to describe their actual work functions ... The key interlocking mechanism of [modern corporate culture] is its reporting system. Each manager ... formulates his commitments to his boss; this boss takes these commitments and those of his other subordinates, and in turn makes a commitment to his boss ... This 'management-by-objective' system, as it is usually called, creates a chain of commitments from the CEO down to the lowliest product manager or account executive. In practice, it also shapes a patrimonial authority arrangement that is crucial to defining both the immediate experiences and the long-run career chances of individual managers. In this world, a subordinate owes fealty principally to his immediate boss."
Moral Mazes goes on to describe how bosses use ambiguity with their subordinates (and other more-or-less unconscious subterfuges) in order to preserve the power to claim credit and deflect blame, which tends to perpetuate the personalization of authority. Unlike a straight, Max Weber-style bureaucracy, which is procedure-bound and rule-driven, a patrimonial bureaucracy is a set of hierarchical fiefdoms defined by personal power and patronage.
- David Isenberg on the inefficient nature of pyramidal intelligence:
“When there is good news, credit flows up – so the boss, personifying the organization, looks good to superiors. Then credit flows up again. When there is bad news, it is the boss's prerogative to push blame onto subordinates to keep it from escalating. Bad news that can't be contained threatens a boss's position; if bad news rises up, blame will come down. This is why they shoot messengers. So it's easier to ignore bad news. Thus, Jackall's chemical company studiously ignored a $6 million maintenance item until it exploded (literally) into a $150 million problem. "To make a decision ahead of [its] time risks political catastrophe," said one manager, justifying the deferred maintenance. Then, once the mess had been made, "The decision [to clean up] made itself," said another relieved manager.” (http://isen.com/archives/990601.html)
- The French 'sociologist of work' Philippe Zarifian, on the unease of workers in the contemporary enterprise:
“For several years now, national surveys have consistently indicated a marked degradation of working conditions as employees experience and report them. Sociological field studies confirm it: we are dealing with a phenomenon of vast scope. Individuals at work are suffering, and they say so. One could certainly debate the internal drivers of this suffering: not all researchers agree on this point. But it seems to me that one reality stands out, by its obviousness and its importance: employees are bending under the pressure; it crushes them. Pressure is not mere constraint. Every person continually develops, in his or her personal life, within a network of constraints. The indicators of this pressure are well known: throughput, output targets, customer deadlines, challenges, pressure to meet results, precariousness of one's situation, organized competition among employees, variable individual pay… In them one finds both the revival of old Taylorist recipes, but also something new and more insidious: pressure on the very subjectivity of the individual at work, a force exerted on the mind, which oppresses the person from within, which alienates. But there is another facet of the current situation: the rise of revolt. It shows up far less in the statistics; it expresses itself less in open conflicts. Yet for a sociologist constantly conducting field studies, the fact is hardly contestable. One can sense the coming explosion of a revolt of a scope equivalent to the one that shook France at the end of the 1960s and the beginning of the 1970s, during the great insurrections of the O.S. (note: 'Ouvriers Spécialisés', semi-skilled workers), whatever forms of expression it may take. Revolt is not a mere reaction to pressure. It has deeper causes.
It refers first to a profound, irreversible evolution of free individuality in modern society. It also touches on this important phenomenon: by dint of having to confront performance targets, management indicators, and responsibility for the service rendered to the user or customer, employees have developed an intelligence of questions of corporate strategy. They judge, and in a certain way understand, the policies of their managements, and can even identify their contradictions and inadequacies. It is therefore all the more unbearable for them to be treated as mere executants, soulless machines without thoughts of their own, perpetually presented with a fait accompli. I think our era is witnessing a genuine reversal: many rank-and-file employees are becoming more intelligent than their managements and their shareholders, in the sense of a richer, more complex, more subtle, more comprehensive, more deeply innovative kind of thinking.”
(Zarifian's personal website: http://perso.wanadoo.fr/philippe.zarifian/)
See also these two important contributions on 'the new nature of work': three theses from Philippe Zarifian, based on seven years of study in large institutions and companies, at http://seminaire.samizdat.net/article.php3?id_article=22 ; and three theses on work and cognitive capitalism, by Patrick Dieuaide, at http://seminaire.samizdat.net/article.php3?id_article=12
- A quote from the back cover of The Hacker Ethic, by Pekka Himanen:
“Nearly a century ago, Max Weber articulated the animating spirit of the industrial age, the Protestant ethic. Now, Pekka Himanen – together with Linus Torvalds and Manuel Castells – articulates how hackers represent a new, opposing ethos for the information age. Underlying hackers' technical creations – such as the internet and the personal computer, which have become symbols of our time – are the hacker values that produced them and that challenge us all. These values promoted passionate and freely rhythmed work; the belief that individuals can create great things by joining forces in imaginative ways; and the need to maintain our existing ethical ideals, such as privacy and equality, in our new, increasingly technologized society.”
- The dissatisfaction of the workforce, a report from France:
"The distancing of a growing number of employees from the corporate world … [is] a movement that concerns all the developed countries, going well beyond any Franco-French '35-hour week effect'. 'The commitment of forty-somethings no longer has anything to do with that of the baby boomers of 55 or 60, who, although they lived through or made May '68, never for all that called the company into question,' acknowledges Jean-René Buisson, former head of human resources at Danone, now president of the Association nationale des industries agroalimentaires (ANIA). A finding quantified by a recent poll by the company Chronopost: only one employee under 35 in five now declares being 'heavily or primarily invested in [his] professional life'. Across all generations, seven respondents in ten say they have 'a relationship to work that stops at a boundary: private life'. A shift in mentalities that Mr. Buisson dates to 'a little less than ten years ago, when companies implemented heavy phases of restructuring'. Phases experienced all the more badly in that they affected employees who had given a great deal to the company. 'In the 1970s and 1980s, companies asked employees not only to do their work but to love it,' analyses Patrick Légeron, psychiatrist and head of the Stimulus consultancy. 'Then came the time of layoff plans, of which even the most devoted employees were victims. The young people now keeping their distance are the children of those who lived through these upheavals.' For Mr. Légeron, author of the book Le stress au travail (Odile Jacob Poches, 2003), 'we are witnessing a swing of the pendulum: after overinvestment comes the time of stepping back, even of disinvestment'. A new state of mind summed up by Gilles Moutel, CEO of Chronopost: 'Before, the equation was simple: if an employee gave a lot to a company, the company gave it back.
Now, because of economic instability, employees doubt that such an equation will hold.' Companies, which continue to use management tools conceived in the second half of the twentieth century, find themselves in a paradoxical situation. They must employ people 'who find it hard to believe in the company's words,' explains Dr. Marc Banet, occupational physician at Alcan-Pechiney. 'Companies may well display charters and values, but employees believe in them less and less,' adds the psychiatrist Laurent Chneiweiss, co-author of L'anxiété (Odile Jacob, 2004). 'The majority of employees remain excellent little soldiers,' adds Dr. Banet. 'But the ties with their employer have loosened. They tell themselves that if an opportunity comes along, they will leave.'"
- Andreas Wittel on network sociality
"The term network sociality can be understood in contrast to ‘community’. Community entails stability, coherence, embeddedness, and belonging. It involves strong and long-lasting ties, proximity and a common history or narrative of the collective. Network sociality stands counterposed to Gemeinschaft. It does not represent belonging but integration and disintegration… In network sociality social relations are not ‘narrational’ but informational; they are not based on mutual experience or common history, but primarily on an exchange of data and on ‘catching up’. Narratives are characterised by duration, whereas information is defined by ephemerality. Network sociality consists of fleeting and transient, yet iterative social relations; of ephemeral but intense encounters. Narrative sociality often takes place in bureaucratic organisations. In network sociality the social bond at work is not bureaucratic but informational; it is created on a project by project basis, by the movement of ideas, the establishment of solely temporary standards and protocols, and the creation and protection of proprietary information. Network sociality is not characterised by a separation but by a combination of both work and play. It is constructed on the grounds of communication and transport technology. Network…, I suggest a shift away from regimes of sociality in closed social systems and towards regimes of sociality in open social systems. Both communities and organisations are social systems with clear boundaries, with a highly defined inside and outside. Networks however are open social systems."
- A view on the hacker ethic by Richard Barbrook, in the "Manifesto for Digital Artisans":
4. We will shape the new information technologies in our own interests. Although they were originally developed to reinforce hierarchical power, the full potential of the Net and computing can only be realised through our autonomous and creative labour. We will transform the machines of domination into the technologies of liberation.
9. For those of us who want to be truly creative in hypermedia and computing, the only practical solution is to become digital artisans. The rapid spread of personal computing and now the Net are the technological expressions of this desire for autonomous work. Escaping from the petty controls of the shopfloor and the office, we can rediscover the individual independence enjoyed by craftspeople during proto-industrialism. We rejoice in the privilege of becoming digital artisans.
10. We create virtual artefacts for money and for fun. We work both in the money-commodity economy and in the gift economy of the Net. When we take a contract, we are happy to earn enough to pay for our necessities and luxuries through our labours as digital artisans. At the same time, we also enjoy exercising our abilities for our own amusement and for the wider community. Whether working for money or for fun, we always take pride in our craft skills. We take pleasure in pushing the cultural and technical limits as far forward as possible. We are the pioneers of the modern.”
- On the necessity of open collaboration:
"The free sharing of information – in this case code as opposed to software development – has nothing to do with altruism or a specific anti-authoritarian social vision. It is motivated by the fact that in a complex collaborative process, it is effectively impossible to differentiate between the "raw material" that goes into a creative process and the "product" that comes out. Even the greatest innovators stand on the shoulders of giants. All new creations are built on previous creations and themselves provide inspiration for future ones. The ability to freely use and refine those previous creations increases the possibilities for future creativity."
- Proprietary vs. Open Source approaches
"This is because, for all its flaws, the open-source model has powerful advantages. The deepest and also most interesting of these advantages is that, to put it grossly, open source takes the bullshit out of software. It severely limits the possibility of proprietary "lock-in"—where users become hostage to the software vendors whose products they buy—and therefore eliminates incentives for vendors to employ the many tricks they traditionally use on each other and on their customers. The transparency inherent in the open-source model also limits secrecy and makes it harder to avoid accountability for shoddy work. People write code differently when they know the world is looking at it. Similarly, software companies behave differently when they know that customers who don't like a product can fix it themselves or switch to another provider. On the available evidence, it appears that the secrecy and maneuvering associated with the traditional proprietary software business generate enormous costs, inefficiencies, and resentment. Presented with an alternative, many people will leap at it." Charles Ferguson, Technology Review, June 2005, at http://technologyreview.com/articles/05/06/issue/feature_linux.asp
- Flexible involvement
"an often overlooked characteristic of open source collaboration is the flexible degree of involvement in and responsibility for the process that can be accommodated. The hurdle to participating in a project is extremely low. Valuable contributions can be as small as a single, one-time effort – a bug report, a penetrating comment in a discussion. Equally important, though, is the fact that contributions are not limited to just that. Many projects also have dedicated, full-time, often paid contributors who maintain core aspects of the system – such as maintainers of the kernel, editors of a slash site. Between these two extremes – one-time contribution and full-time dedication – all degrees of involvement are possible and useful. It is also easy to slide up or down the scale of commitment. Consequently, dedicated people assume responsibility when they invest time in the project, and lose it when they cease to be fully immersed. Hierarchies are fluid and merit-based, whatever merit means to the peers. This also makes it difficult for established members to continue to hold onto their positions when they stop making valuable contributions."
- Hackers are motivated by learning:
Programmers are interested in and motivated by personal development and the use value of the product, according to this survey: http://opensource.mit.edu/papers/lakhaniwolf.pdf
Why do people participate in open source projects, especially the less exciting 'mundane' tasks?, at http://web.mit.edu/evhippel/www/papers/opensource.PDF
- Eben Moglen on the marginal cost of reproducing information:
"The conversion to digital technology means that every work of utility or beauty, every computer program, every piece of music, every piece of visual or literary art, every piece of video, every useful piece of information—train schedule, university curriculum, map, chart—every piece of useful or beautiful information can be distributed to everybody at the same cost that it can be distributed to anybody. For the first time in human history, we face an economy in which the most important goods have zero marginal cost."
- A web-based 'open source' industrial design project:
“ThinkCycle is a Web-based industrial-design project that brings together engineers, designers, academics, and professionals from a variety of disciplines. Soon, some physicians and engineers were pitching in – vetting designs and recommending new paths. Within a few months, Prestero's team had turned the suggestions into an ingenious solution. Taking inspiration from a tool called a rotameter used in chemical engineering, the group crafted a new IV system that's intuitive to use, even for untrained workers. Remarkably, it costs about $1.25 to manufacture, making it ideal for mass deployment. Prestero is now in talks with a medical devices company; the new IV could be in the field a year from now. ThinkCycle's collaborative approach is modeled on a method that for more than a decade has been closely associated with software development: open source. It's called that because the collaboration is open to all and the source code is freely shared. Open source harnesses the distributive powers of the internet, parcels the work out to thousands, and uses their piecework to build a better whole – putting informal networks of volunteer coders in direct competition with big corporations. It works like an ant colony, where the collective intelligence of the network supersedes any single contributor. Open source, of course, is the magic behind Linux, the operating system that is transforming the software industry. Linux commands a growing share of the server market worldwide and even has Microsoft CEO Steve Ballmer warning of its "competitive challenge for us and for our entire industry." And open source software transcends Linux. Altogether, more than 65,000 collaborative software projects click along at Sourceforge.net, a clearinghouse for the open source community. The success of Linux alone has stunned the business world.”
- Open Source Biology
“open-source approaches have emerged in biotechnology already. The international effort to sequence the human genome, for instance, resembled an open-source initiative. It placed all the resulting data into the public domain rather than allow any participant to patent any of the results. Open source is also flourishing in bioinformatics, the field in which biology meets information technology. This involves performing biological research using supercomputers rather than test-tubes. Within the bioinformatics community, software code and databases are often swapped on “you share, I share” terms, for the greater good of all. Evidently the open-source approach works in biological-research tools and pre-competitive platform technologies. The question now is whether it will work further downstream, closer to the patient, where the development costs are greater and the potential benefits more direct. Open-source research could indeed, it seems, open up two areas in particular. The first is that of non-patentable compounds and drugs whose patents have expired. These receive very little attention from researchers, because there would be no way to protect (and so profit from) any discovery that was made about their effectiveness. To give an oft-quoted example, if aspirin cured cancer, no company would bother to do the trials to prove it, or go through the rigmarole of regulatory approval, since it could not patent the discovery. (In fact, it might be possible to apply for a process patent that covers a new method of treatment, but the broader point still stands.) Lots of potentially useful drugs could be sitting under researchers' noses.
The second area where open source might be able to help would be in developing treatments for diseases that afflict small numbers of people, such as Parkinson's disease, or are found mainly in poor countries, such as malaria. In such cases, there simply is not a large enough market of paying customers to justify the enormous expense of developing a new drug. America's Orphan Drug Act, which provides financial incentives to develop drugs for small numbers of patients, is one approach. But there is still plenty of room for improvement—which is where the open-source approach might have a valuable role to play."(http://www.economist.com/displaystory.cfm?story_id=2724420)
- Open Source Biotechnology in Agriculture
"Researchers in Australia have devised a method of creating genetically modified crops that does not infringe on patents held by big biotechnology companies. The technique will be made available free to others to use and improve, as long as any improvements are also available free."
- Open Source Architecture
“This weblog has been created as a result of the article A communism of ideas, towards an architectural open source practice. It proposes a reorganization of architectural practice in order to deal with the diminishing role of the architect in spatial planning issues. Instead of continuing the battle of egos this weblog sets out to explore new models of cooperation that can reinvent architectural practice and develop innovative spatial models at the same time.”
More in the article ‘Towards an architectural open source practice’, at http://www.archis.org/archis_old/english/archis_art_e_2003/art_3b_2003e.html , and see also the blog http://www.suite75.net/blog/maze/. In the New York Times Magazine, David Brooks has written an interesting article describing the development of exurbia, a move beyond the suburbs that seems to exhibit P2P principles; see http://www.nytimes.com/2004/04/04/magazine/04EXURBAN.html?th .
The Open Source Metaverse
“Because he knows something about being at the whim of faceless decision-makers at profit-minded gaming companies, Ludlow is a big fan of an emerging concept in massively multiplayer online game circles: the open-source metaverse. Built by independent contributors, the open-source metaverse is an infinitely extensible virtual world with few rules and no oversight from corporate overlords. "Instead of the game being developed by a game corporation, it would be developed by multiple users donating time in sort of a wiki style," said Ludlow, a philosophy professor at the University of Michigan. "This is a different picture, one in which the games would emerge in a bottom-up kind of way. The structure wouldn't be dictated, but would emerge from numerous people trying to extend the game space." Ludlow acknowledges that his vision of a fully open-source virtual world is a couple of years off. But it's not total fantasy. There are already at least three groups implementing some form of open, metaverse-like platform: The Open Source Metaverse Project, or OSMP, the Croquet Project and MUPPETS. MUPPETS, or Multi-User Programming Pedagogy for Enhancing Traditional Study, is the brainchild of Andy Phelps, an assistant information technology professor at the Rochester Institute of Technology. He uses the project to immerse new students in their coursework even before they develop sophisticated programming skills.”
- Mental Transaction Costs, comment by Clay Shirky:
"The people pushing micropayments believe that the dollar cost of goods is the thing most responsible for deflecting readers from buying content, and that a reduction in price to micropayment levels will allow creators to begin charging for their work without deflecting readers. This strategy doesn't work, because the act of buying anything, even if the price is very small, creates what Nick Szabo calls mental transaction costs, the energy required to decide whether something is worth buying or not, regardless of price. The only business model that delivers money from sender to receiver with no mental transaction costs is theft, and in many ways, theft is the unspoken inspiration for micropayment systems. Like the salami slicing exploit in computer crime, micropayment believers imagine that such tiny amounts of money can be extracted from the user that they will not notice, while the overall volume will cause these payments to add up to something significant for the recipient. But of course the users do notice, because they are being asked to buy something. Mental transaction costs create a minimum level of inconvenience that cannot be removed simply by lowering the dollar cost of goods."
Additional quote from Clay Shirky, on the 'fame vs. fortune' dilemma
"The fact that digital content can be distributed for no additional cost does not explain the huge number of creative people who make their work available for free. After all, they are still investing their time without being paid back. Why? The answer is simple: creators are not publishers, and putting the power to publish directly into their hands does not make them publishers. It makes them artists with printing presses. This matters because creative people crave attention in a way publishers do not. Prior to the internet, this didn't make much difference. The expense of publishing and distributing printed material is too great for it to be given away freely and in unlimited quantities – even vanity press books come with a price tag. Now, however, a single individual can serve an audience in the hundreds of thousands, as a hobby, with nary a publisher in sight. This disrupts the old equation of "fame and fortune." For an author to be famous, many people had to have read, and therefore paid for, his or her books. Fortune was a side-effect of attaining fame. Now, with the power to publish directly in their hands, many creative people face a dilemma they've never had before: fame vs fortune."
- Aaron Krowne on CBPP ‘authority models’
URL = http://www.freesoftwaremagazine.com/free_issues/issue_02/fud_based_encyclopedia/
- The arguments by the owner-centric model advocates for Wikipedia are summarized here at
The article is very informative about the kinds of problems that arise within P2P communities.
“Second problem: the dominance of difficult people, trolls, and their enablers. I stopped participating in Wikipedia when funding for my position ran out. That does not mean that I am merely mercenary; I might have continued to participate, were it not for a certain poisonous social or political atmosphere in the project. There are many ways to explain this problem, and I will start with just one. Far too much credence and respect are accorded to people who in other internet contexts would be labelled "trolls." There is a certain mindset associated with unmoderated Usenet groups and mailing lists that infects the collectively-managed Wikipedia project: if you react strongly to trolling, that reflects poorly on you, not (necessarily) on the troll. If you attempt to take trolls to task or demand that something be done about constant disruption by trollish behavior, the other listmembers will cry "censorship," attack you, and even come to the defense of the troll. This drama has played out thousands of times over the years on unmoderated internet groups, and since about the fall of 2001 on the unmoderated Wikipedia. Wikipedia has, to its credit, done something about the most serious trolling and other kinds of abuse: there is an Arbitration Committee that provides a process whereby the most disruptive users of Wikipedia can be ejected from the project. But there are myriad abuses and problems that never make it to mediation, let alone arbitration. A few of the project's participants can be, not to put a nice word on it, pretty nasty. And this is tolerated. So, for any person who can and wants to work politely with well-meaning, rational, reasonably well-informed people—which is to say, to be sure, most people working on Wikipedia—the constant fighting can be so off-putting as to drive them away from the project.
This explains why I am gone; it also explains why many others, including some extremely knowledgeable and helpful people, have left the project… The root problem: anti-elitism, or lack of respect for expertise. There is a deeper problem—or I, at least, regard it as a problem—which explains both of the above-elaborated problems. Namely, as a community, Wikipedia lacks the habit or tradition of respect for expertise. As a community, far from being elitist (which would, in this context, mean excluding the unwashed masses), it is anti-elitist (which, in this context, means that expertise is not accorded any special respect, and snubs and disrespect of expertise is tolerated)… Consequently, nearly everyone with much expertise but little patience will avoid editing Wikipedia, because they will—at least if they are editing articles that are subject to any sort of controversy—be forced to defend their edits on article discussion pages against attacks by nonexperts."
- Town council vs. clique, a conflict in the early phases of Linux
"Alan Cox, a senior Linux developer, introduces two more organizational metaphors, the "town council" and the "clique," in an essay published on Slashdot in 1998. Cox provides a "guide to how to completely screw-up a free software development project" by describing the early days of the Linux on 8086 project, "one of the world’s most pointless exercises" and therefore one that has great "Hack Value." Cox writes that this obscurity meant that there were really only two or three people with both the skill and interest required, but also many "dangerously half-clued people with opinions — not code, opinions." These "half-clued" participants acted like a "town council" which created so much ineffectual noise that the core developers were led to abandon the "bazaar model" and, using kill files, formed a "core team" which, Cox writes, "is a polite word for a clique." Cox argues that ignoring the "half-clued" was understandable but badly mistaken and ultimately caused the project to stall, not least because the real programmers were unable to help the wannabe programmers learn to contribute usefully and thus change the unproductive "town council" into a productive project."
- Types of conflict
"Four types of conflicts can be identified in the FLOSS field: ideological, technical, personal and cultural conflicts. Of course, most actually existing conflicts are a mixture of these types.
Ideological conflicts are usually about differing ideas about and approaches to the meaning of freedom of software. The Gnome project, for example, was founded as a result of a conflict with KDE, which used the then-proprietary Qt library from Trolltech. It could be argued that the term Open Source also emerged as the result of such a conflict. A group of key people in the field — such as Eric Raymond — introduced the term to give the phenomenon a new name. They wanted to distance themselves from the FSF and its allegedly anti-commercial attitude. They hoped to popularise Open Source Software, as they now called Free Software, among wider groups of users, who, they felt, might have been alienated by the FSF's value-conscious approach.
Technical conflicts are, evidently, about different views on the best solution for a given technical problem. Bearing in mind the strong role of technological perfection and code elegance, they have a huge conflict potential. Like the famous debate about macro- vs. micro-kernels, these conflicts are disputes about the best way to achieve a specified target.
Personal conflicts usually result from the behaviour of individuals who commit a breach of (social) norms. All discussions about how people should behave towards each other fall into this category. Also in this category was a case in which a developer unjustifiably claimed authorship.
The fourth and final type of conflict is best described as cultural. These are less about breaches of norms, and have more to do with the non-acceptance of norms. One such conflict has developed over the past few years following the wider spread of FLOSS; today, there are many users with only a very limited knowledge of computers. But it is not only their level of skills that is lower; their attitude to computers is also quite different. They take a consumer's view of FLOSS and have very different expectations vis-à-vis developers. They wait (or ask impatiently) for a specific feature instead of contributing to its development, and they ask questions clearly answered in the documentation — both demands very similar to those made by purchasers of commercial products. This results in many conflicts between developers and users who are not part of the FLOSS (sub)culture, and in mutual alienation."
- The five most common problems associated with using FLOSS software are described here at
- Some equipotentiality practices
"Projects also differ in whom they consider members, and the degree of membership within a given project can vary as well. Apart from officially assigned functions, such as being a member of the core team or a maintainer, writing access to Source Code Management Systems (SCM) is a distinguishing feature, as it allows contributors to work autonomously. Projects handle the granting of such rights very differently. Debian demands the successful completion of a series of tests to prove technical ability but also to show adherence to the Debian Social Contract — a kind of constitutional charter of the project which has a lot to say about freedom of software. Only when these tests have been passed satisfactorily — which can take a month or more than a year — is one assigned the official status of a Debian Developer. This form of admission — which is bordering on a formal initiation process — seems to be rather unique... It is widely assumed that the allocation and distribution of positions is based on reputation. Such reputation, though, is not only acquired meritocratically by writing good code; the idea of elders (where the project founder is assigned in some fashion the role of leader) is also quite important. The organisational structures of FLOSS projects are not designed at the drawing board; they are the result of happenstance, conventions ("that’s what is done in FLOSS projects"), and negotiation. "
- Reputation-based management schemes and the 'karma' systems
"NoLogo.org is perhaps the most prominent second-generation slash site. This makes it a good example of how the OSI experience, embodied by a specific code, is now at a stage where it can be replicated across different contexts with relative ease. NoLogo.org is based on the current, stable release of Slashcode, an open source software platform released under the GPL, and developed for and by the Slashdot community. Slashdot is the most well-known and obvious example of OSI, since it is one of the main news and discussion sites for the open source movement (www.slashdot.org).
Of particular importance for OSI is the collaborative moderation process supported by the code. Users who contribute good stories or comments on stories are rewarded with "karma," which is essentially a point system that enables people to build up their reputation. Once a user has accumulated a certain number of points, she can assume more responsibilities, and is even trusted to moderate other people's comments. Karma points have a half-life of about 72 hours. If a user stops contributing, their privileges expire. Each comment can be assigned points by several different moderators, and the final grade (from -1 to +5) is an average of all the moderators' judgments. A good contribution is one that receives high grades from multiple moderators. This creates a kind of double peer-review process. The first is the content of the discussion itself where people respond to one another, and the second is the unique ranking of each contribution. This approach to moderation addresses several problems that bedevil e-mail lists very elegantly. First, the moderation process is collaborative. No individual moderator can impose his or her preferences. Second, moderation means ranking, rather than deleting. Even comments ranked -1 can still be read. Third, users set their preferences individually, rather than allowing a moderator to set them for everyone. Some might enjoy the strange worlds of -1 comments, whereas others might only want to read the select few that garnered +5 rankings. Finally, involvement is reputation- (i.e. karma-) based and flexible. Since moderation is collaborative, it's possible to give out moderation privileges automatically. Moderators have very limited control over the system. As an additional layer of feedback, moderators who have accumulated even more points through consistently good work can "meta-moderate," or rank the other moderators."
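The moderation mechanics quoted above (karma points that decay with a half-life of about 72 hours, comment grades from -1 to +5 averaged over several moderators, and privileges that unlock past a points threshold) can be sketched in a few lines of Python. This is a minimal illustration, not Slashcode's actual logic; the class name, threshold and constants are invented for the sketch:

```python
import time

class KarmaUser:
    """Sketch of a Slash-style karma account (names and values are hypothetical)."""
    HALF_LIFE = 72 * 3600   # karma halves every ~72 hours of inactivity (seconds)
    MOD_THRESHOLD = 10.0    # invented cutoff for receiving moderation privileges

    def __init__(self):
        self.karma = 0.0
        self.last_update = time.time()

    def _decay(self, now=None):
        # Exponential decay models the 72-hour half-life described above.
        now = now if now is not None else time.time()
        elapsed = now - self.last_update
        self.karma *= 0.5 ** (elapsed / self.HALF_LIFE)
        self.last_update = now

    def reward(self, points, now=None):
        # Good stories or comments earn karma points.
        self._decay(now)
        self.karma += points

    def can_moderate(self, now=None):
        # Privileges expire automatically when a user stops contributing,
        # because decayed karma falls back under the threshold.
        self._decay(now)
        return self.karma >= self.MOD_THRESHOLD


def comment_grade(moderator_scores):
    """Final grade is the average of all moderators' judgments, clamped to -1..+5."""
    avg = sum(moderator_scores) / len(moderator_scores)
    return max(-1, min(5, round(avg)))
```

The design point the quote stresses falls out directly: no single moderator controls a comment's fate (the grade is an average), and moderation power is a side effect of recent contribution rather than a permanent office.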
Down and Out in the Magic Kingdom, a 2003 novel by Cory Doctorow, depicts a future society organized around such principles.
- The importance of reputation-based systems
"…suggest the potential utility of reputation services is far greater, touching nearly every aspect of society. By leveraging our limited and local human judgement power with collective networked filtering, it is possible to promote an interconnected ecology of socially beneficial reputation systems — to restrain the baser side of human nature, while unleashing positive social changes and enabling the realization of ever higher goals."
- Stephan Merten, of Oekonux.de, defines the 'General Public License Society':
"In every society based on exchange – which includes the former Soviet bloc – making money is the dominant aim. Because a GPL Society would not be based on exchange, there would be no need for money anymore. Instead of the abstract goal of maximizing profit, the human-oriented goal of fulfilling the needs of individuals as well as of mankind as a whole would be the focus of all activities.
The increased communication possibilities of the internet will become even more important than today. An ever-increasing part of production and development will take place on the internet or will be based on it. The B2B (business to business) concept, which is about improving the information flow between businesses producing commodities, shows us that the integration of production in the field of information has just started. On the other hand the already visible phenomenon of people interested in a particular area finding each other on the internet will become central for the development of self-unfolding groups.
The difference between consumers and producers will vanish more and more. Already today the user can configure complex commodities like cars or furniture to some degree, which makes virtually each product an individual one, fully customized to the needs of the consumer. This increasing configurability of products is a result of the always increasing flexibility of the production machines. If this is combined with good software you could initiate the production of highly customized material goods allowing a maximum of self-unfolding – from your web browser up to the point of delivery.
Machines will become even more flexible. A new type of machine, available for some years now – fabbers – is already more universal in some areas than modern industrial robots, not to mention stupid machines like a punch. The flexibility of the machines results from the fact that material production is increasingly based on information. At the same time, the increasing flexibility of the machines gives the users more room for creativity and thus for self-unfolding.
In a GPL society there is no more reason for competition beyond the type of competition we see in sports. Instead, various kinds of fruitful cooperation will take place. You can see that today not only in Free Software but also (partly) in science and, for instance, in cooking recipes: imagine your daily meal if cooking recipes were proprietary and available only after paying a license fee, instead of being the result of a world-wide cooperation of cooks."
- Resources on Edward Haskell:
Haskell's ideas are very well summarized by Timothy Wilken, see especially chapter five, in http://www.synearth.net/UCS2-Science-Order.pdf ; http://www.synearth.net/Order/UCS2-Science-Order.html
The full text of Haskell's book Full Circle can be read at http://www.kheper.net/topics/Unified_Science/index.html ; other relevant texts are:
The evolution of humanity, at http://www.synearth.net/Haskell/FC/FCCh4.htm
The basics explained, at http://futurepositive.synearth.net/2002/07/02
- The evolution of cooperation:
“Evolution's Arrow also argues that evolution itself has evolved. Evolution has progressively improved the ability of evolutionary mechanisms to discover the best adaptations. And it has discovered new and better mechanisms. The book looks at the evolution of pre-genetic, genetic, cultural, and supra-individual evolutionary mechanisms. And it shows that the genetic mechanism is not entirely blind and random. Evolution's Arrow goes on to use an understanding of the direction of evolution and of the mechanisms that drive it to identify the next great steps in the evolution of life on earth – the steps that humanity must take if we are to continue to be successful in evolutionary terms. It shows how we must change our societies to increase their scale and evolvability, and how we must change ourselves psychologically to become self-evolving organisms – organisms that are able to adapt in whatever ways are necessary for future evolutionary success, unfettered by their biological or social past. Two critical steps will be the emergence of a highly evolvable, unified and cooperative planetary organisation that is able to adapt as a coherent whole, and the emergence of evolutionary warriors – individuals who are conscious of the direction of evolution, and who use their evolutionary consciousness to promote and enhance the evolutionary success of humanity.”
- Free sharing as an aspect of civilisation-building:
"The free relation is nonetheless very different from the market relation, even if the market relation always ends, in use, in a non-market relation: when you buy an apricot, it is a pure commodity only at the moment when you hesitate between it, the peach or the bunch of grapes; but once you have bought it and are eating it, it is your capacity to appreciate its taste that comes into play. Free provision is a leap of civilisation. At a certain point, our problem is no longer whether or not our child will go to school, but how we will define the role of education and ensure each child's success at school… The questions gain in quality and in ambition; they create social bonds. Society has shown that it knows how to extend the field of the free to domains that were not given at the start, that were not given by nature, for example with public schooling or Social Security. It therefore seemed to me that pushing back the frontier, identifying the places where the limit of what is dominated by the market can be pushed back and where spaces can be freed from the market relation, was a very important, very concrete, very immediate possibility. This does not defer things to some radiant future; it can be done right away, and so already allows us to experiment with another form of relation to people and to things. Free provision, let us recall, means that a good is valuable above all through its use and only accidentally has exchange value." (http://www.peripheries.net/g-sagot1.htm)
- Cooperation Studies and Cooperative Intelligence
Cooperation studies are well monitored by Howard Rheingold and a whole team of collaborators at the Smartmobs.com weblog.
Here is a summary of cooperation theories, maintained by Paul Pivcevic of the http://www.cooperativeintelligence.org/ website: some well-researched theories about how groups of people can evolve, true for any kind of organisation.
Key attributes in the stages of team/group development

Forming: Attempt at establishing primary purpose, structure, roles, leader, task and process relationships, and boundaries of the team.
Storming: Arising and dealing with conflicts surrounding key questions from the Forming stage.
Norming: Settling down of team dynamics and stepping into team norms and agreed ways of working.
Performing: Team is now ready and enabled to focus primarily on its task whilst attending to individual and team maintenance needs.

In or Out: Members decide whether they are part of the team or not.
Top or Bottom: Focus on who has power and authority within the team.
Near or Far: Finding levels of commitment and engagement within their roles.

Preaffiliation: Sense of unease; team engagement is unsure and superficial.
Power and Control: Focus on who has power and authority within the team; attempt to define roles.
Intimacy: Team begins to commit to task and engage with one another.
Differentiation: Ability to be clear about individual roles; interactions become workmanlike.

Hill & Gruner (1973):
Orientation: Structure sought.
Exploration: Exploration around team roles and relations.
Production: Clarity of team roles and team cohesion.

Dependency: Team members invest the leaders with all the power and authority.
Fight or Flight: Team members challenge the leaders or other members, or withdraw.
Pairing: Team members form pairings in an attempt to resolve their anxieties.

Scott Peck (1990):
Pseudo-community: Members try to fake teamliness.
Chaos: Attempt to establish pecking order and team norms.
Emptiness: Giving up of expectations, assumptions and hope of achieving anything.
Community: Acceptance of each other and focus on the task.

Adapted from Making Sense of Change Management: a Complete Guide to the Models, Tools, and Techniques of Organisational Change, Cameron, E. and Green, M., Kogan Page, London, 2004.
- Definitions: Collective intelligence, co-intelligence, groupthink, cognitive bias
"Tom Atlee, founder of The Co-Intelligence Institute, coined the term co-intelligence, which he usually defines as meaning what intelligence looks like when we take seriously the wholeness, co-creativity and interconnectedness of life. Collective intelligence is only one manifestation of co-intelligence. Others include multi-modal intelligence, collaborative intelligence, wisdom, resonant intelligence and universal intelligence."
Groupthink is a term coined by psychologist Irving Janis in 1972 to describe one process by which a group can make bad or irrational decisions. In a groupthink situation, each member of the group attempts to conform his or her opinions to what they believe to be the consensus of the group. This results in a situation in which the group ultimately agrees on an action which each member might normally consider to be unwise.
Cognitive bias is any of a wide range of observer effects identified in cognitive science, including very basic statistical and memory errors that are common to all human beings (first identified by Amos Tversky and Daniel Kahneman) and that drastically skew the reliability of anecdotal and legal evidence. Cognitive biases also significantly affect the scientific method, which is deliberately designed to minimize such bias from any one observer.
- Holoptism defined, Jean-Francois Noubel
"A 'holoptical' space: spatial proximity offers each participant a complete and constantly updated perception of this Whole. Each one, drawing on his own experience and expertise, refers to it in order to anticipate his actions and to adjust and coordinate them with those of the others. There is thus a ceaseless back-and-forth, functioning like a mirror, between the individual and collective levels. We shall call holoptism the set of these properties, namely 'horizontal' transparency (perception of the other participants) plus 'vertical' communication with the Whole that emerges from the collective. In the examples evoked above, the conditions of holoptism are provided by 3D space; our natural senses and organs serve directly as interfaces. Note that the role of a coach, or of an observer, consists in fostering the conditions of holoptism."
- Distinguishing insect swarming from human collective intelligence:
“Insect societies offer us a model of functioning very different from the human model: a decentralized model, founded on the cooperation of autonomous units with relatively simple and probabilistic behaviour, which are distributed in the environment and have only local information (by which I mean that they possess no representation or explicit knowledge of the global structure they have to produce, or within which they evolve; in short, that they have no plan). Insects possess sensory equipment that allows them to respond to stimuli: those emitted by their fellow insects and those coming from their environment. These stimuli are obviously not equivalent to words or to signs with symbolic value. Their meaning depends on their intensity and on the context in which they are emitted; they are simply attractive or repulsive, inhibiting or activating. In insect societies the global 'project' is therefore not explicitly programmed in the individuals, but emerges from the chaining together of a great number of elementary interactions between individuals, or between individuals and their environment. There is, in fact, a collective intelligence built out of many individual simplicities.
Swarm intelligence is 'blind' owing to its absence of holoptism; no individual has any idea of what the emergent entity is. What 'stabilizes' and directs the societies of social insects is largely external conditions (temperature, weather, dangers, food…), which serve as a natural container and indicate the way to follow. It took millions of years of evolution to refine the behavioural repertoire of the individuals (genetically 'programmed'), and for these societies to reach the stability and robustness that we know." From Jean-Francois Noubel, quoting Jean-Louis Deneubourg, professor of biology at the Brussels-based ULB university and an expert on social insects, at http://www.thetransitioner.org/ic
- Swarming, the market, collective intelligence and P2P, note by Jean-Francois Noubel
The following citation helps distinguish between bottom-up processes in general, such as the market, and true collective intelligence and P2P processes, which require holoptism. This citation can also be read in conjunction with the section outlining the differences between the market and P2P.
"It would seem that in humans, too, a form of swarm intelligence manifests itself in the domain of the economy. Every time we make a payment we perform a gesture quite similar, in its simplicity and its dynamics, to an exchange between two social insects. From the multitude of simple, probabilistic, individual-to-individual transactions there emerges a highly elaborate collective system, endowed with adaptive properties and responsive to its environment. This is how human society has long managed and balanced its resources at the macroscopic level (whereas at the local level of an organisation it is pyramidal intelligence that organises their circulation, as we noted above).
Limits of swarm intelligence: swarm intelligence works only on the condition that its agents are uniform and de-individuated. These agents, anonymous among the multitude of other anonymous agents, are easily sacrificed in it – even on a large scale – in the name of the global equilibrium of the system. If this seems acceptable for social insects, where each individual is undifferentiated, it obviously is not for animal species whose equilibrium rests precisely on the differentiation of individuals, and particularly not for man. Yet this fundamental distinction seems to be ignored by many economic theories, which base their models and doctrines on interactions between undifferentiated agents (the consumer, the citizen). The liberal approach postulates that the system must find its equilibrium at the macroscopic level by itself, through the play of internal and external constraints (some invoke Adam Smith's famous expression, the invisible hand). Modelling human society as a sum of undifferentiated agents – even with random variations in behaviour – constitutes at best an epistemological error, at worst a very dangerous doctrine."
- Stigmergy defined
"Stigmergy is a term used in biology (from the work of French biologist Pierre-Paul Grassé) to describe environmental mechanisms for coordinating the work of independent actors (for example, ants use pheromones to create trails and people use weblog links to establish information paths, for others to follow). The term is derived from the Greek words stigma ("sign") and ergon ("work"). Stigmergy can be used as a mechanism to understand underlying patterns in swarming activity."
(Global Guerrillas weblog)
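The mechanism can be illustrated with a toy pheromone-trail model: agents never communicate directly and hold no global plan; they only read and reinforce marks left in a shared environment. All class names and constants below are invented for the illustration, not taken from any biology or simulation library:

```python
import random

class TrailNetwork:
    """Toy stigmergy model: agents coordinate only through pheromone marks
    left in a shared environment (illustrative sketch, invented names)."""

    def __init__(self, paths, evaporation=0.1, deposit=1.0):
        self.pheromone = {p: 1.0 for p in paths}  # every path starts equally marked
        self.evaporation = evaporation            # fraction of each mark lost per step
        self.deposit = deposit                    # total pheromone an agent lays per trip

    def choose_path(self, rng=random):
        # An agent senses only the local marks; its choice is probabilistic,
        # weighted by pheromone strength; there is no global plan or map.
        paths = list(self.pheromone)
        weights = [self.pheromone[p] for p in paths]
        return rng.choices(paths, weights=weights, k=1)[0]

    def step(self, path_lengths, rng=random):
        # One agent travels one path; all marks evaporate a little, and the
        # chosen path is reinforced in inverse proportion to its length.
        chosen = self.choose_path(rng)
        for p in self.pheromone:
            self.pheromone[p] *= (1 - self.evaporation)
        self.pheromone[chosen] += self.deposit / path_lengths[chosen]
        return chosen
```

Run over many steps, the positive feedback (stronger marks attract more traffic, and shorter paths are reinforced more per trip) tends to concentrate agents on the shorter path, which is how a colony can find short trails without any individual knowing the map.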
- Towards 'sufficient' and 'repersonalised' money systems, quote by Keith Hart
Clearly, a radical monetary reform is going to be at the heart of the problematic of creating a P2P-based society. Instead of the present situation, in which only 10% of the financial supply reaches those who need it and the larger part of the world is excluded from its circuits, we need a monetary format that empowers bottom-up development. Today we have the paradoxical situation of a financial system that is overabundant for those who do not need it, and scarce in those parts of the world that really need it. Keith Hart's Money in an Unequal World (New York and London: Texere, 2001) is a good place to start explorations in monetary reform.
“Money is the problem, but it is also the solution. We have to find ways of organising markets as equal exchange and that means detaching the forms of money from the capitalist institutions which currently define them. I believe that, instead of taking money to be something scarce beyond our control, we could begin to make it ourselves as a means of accounting for those exchanges whose outcomes we wish to calculate. Money would then become multiple sources of personal credit, building on the technology which has already given us plastic cards. The key to repersonalisation of the economy is cheap information. Money was previously impersonal because objects exchanged at distance needed to be detached from the parties involved. Now growing amounts of information can be attached to transactions involving people anywhere in the world. This provides the opportunity for us to make circuits of exchange employing money forms which reflect our individuality, so that money may be more meaningful to each of us as a means of participating in the multiple associations we choose to enter. All of this stands in stark contrast to state-made money in the 20th century, where citizens belonged to one national economy whose currency was monopolised by a political class claiming the authority of representation to manage its volume, price and allocation.”
The Open Money project
“Open money is a means of exchange freely available to all. Any community, any association – indeed, any body – can have their own money. Open money is synonymous with LETS – an invitation to come inside and play, as in open door and open house; collaboration as in open hand and open for all; attitude as in open mind. The purpose of the open money project is to bring together and organize the people and resources necessary for the development and propagation of open money everywhere. The open money project is a work in progress – a continuation of almost 20 years of LETSystem development all over the world, two community way projects in Canada using smart cards, the Japan open money project, and, most recently, a community currencies server program, cybercredits. The intent is to develop an open money kernel – a core set of text files, administration tools and software systems that are sufficiently coherent and clear that further elaboration of the set derives from the core concepts themselves, rather than from the particular agendas of the originating writers and contributors. The open money kernel is to have a life of its own.”
Some other complementary currency initiatives
LIBRA project (Milan, Italy), http://www.aequilibra.it/; Banca Etica (Padova, Italy), http://www.bancaetica.com/; Chiemgauer (Bavaria, Germany), http://www.chiemgauer.info/ ; WIR Bank (Switzerland), http://www.wir.ch/
Learning about monetary reform:
Dr. Margrit Kennedy (http://www.margritkennedy.de/) is one of the leading figures in this field; she published "Interest and Inflation-Free Money" (the whole text is available in English at: http://userpage.fu-berlin.de/~roehrigw/kennedy/english/)
A page devoted to ‘alternative economy’ topics, also listing the alternative currencies in Japan, at http://www3.plala.or.jp/mig/econ-uk.html, and on Argentina’s RGT, the world’s biggest non-money barter network
- A listing of technologically-supported collaborative methodologies can be found at
- David Weinberger, on why classification can be different in digital environments:
"In the physical world, a fruit can hang from only one branch. In the digital world, objects can easily be classified in dozens or even hundreds of different categories. In the real world, multiple people use any one tree. In the digital world, there can be a different tree for each person. In the real world, the person who owns the information generally also owns and controls the tree that organizes that information. In the digital world, users can control the organization of information owned by others." (David Weinberger in Release 1.0: http://www.release1-0.com/, reproduced in the JOHO blog)
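Weinberger's point, that a digital object can hang from many branches at once, is easy to make concrete. The catalogue and category names below are invented for illustration only:

```python
# Invented catalogue: each "book" carries several categories at once,
# something a physical shelf cannot do.
catalogue = {
    "money_in_an_unequal_world": {"economics", "anthropology", "money"},
    "sorting_things_out": {"classification", "sociology"},
    "natural_economic_order": {"economics", "money", "reform"},
}

def items_under(category):
    """Invert the mapping: an item 'hangs from' every category it carries."""
    return sorted(item for item, cats in catalogue.items() if category in cats)

print(items_under("money"))
# -> ['money_in_an_unequal_world', 'natural_economic_order']
```

Because classification here is just an inverted index rather than a physical location, each reader could equally well keep their own `catalogue` and so grow a different tree over the same items.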
- Some of the sites that pioneered tagging are
- Broad vs. narrow folksonomies
"Vander Wal [argues that], there are broad folksonomies and narrow folksonomies, and they are entirely distinct. "Delicious is a broad folksonomy, where a lot of people are describing one object," Vander Wal said. "You might have 200 people giving a set of tags to one object, which really gives a lot of depth.... No matter what you call something, you probably will be able to get back to that object." In a broad folksonomy, Vander Wal continued, there is the benefit of the network effect and the power curve because so many people are involved. An example is the website of contemporary design magazine Moco Loco, to which 166 Delicious users had applied the tag "design." Conversely, Vander Wal explained, Flickr's system is a narrow folksonomy, because rather than many people tagging the same communal items, as with Delicious, small numbers of users tag individual items. Thus many users tag items, but of those, only a small number will tag a particular item. "
Explaining and showing broad and narrow folksonomies / Thomas Vander Wal – <http://www.personalinfocloud.com/2005/02/explaining_and_.html> : February 21, 2005
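Vander Wal's broad/narrow distinction reduces to a simple measurement: how many distinct users have tagged a given item. A minimal sketch, with invented users and events (the Moco Loco item echoes the example in the quote above):

```python
from collections import defaultdict

# Invented (user, item, tag) triples, as a bookmarking service might log them.
tagging_events = [
    ("alice", "mocoloco.com", "design"),
    ("bob",   "mocoloco.com", "design"),
    ("carol", "mocoloco.com", "architecture"),
    ("dave",  "photo_1234",   "sunset"),
]

def taggers_per_item(events):
    """Count distinct taggers per item: many taggers per item indicates a
    broad folksonomy (Delicious-style), one or two a narrow one
    (Flickr-style)."""
    taggers = defaultdict(set)
    for user, item, _tag in events:
        taggers[item].add(user)
    return {item: len(users) for item, users in taggers.items()}

counts = taggers_per_item(tagging_events)
# counts == {"mocoloco.com": 3, "photo_1234": 1}
```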
- Clay Shirky on tagging vs. metadata
"This is something the 'well-designed metadata' crowd has never understood – just because it's better to have well-designed metadata along one axis does not mean that it is better along all axes, and the axis of cost, in particular, will trump any other advantage as it grows larger. And the cost of tagging large systems rigorously is crippling, so fantasies of using controlled metadata in environments like Flickr are really fantasies of users suddenly deciding to become disciples of information architecture."
(cited by Cory Doctorow in the Boing Boing blog, January 2005)
When folksonomies make sense
"Taxonomies are suitable for classifying corpora of homogeneous, stable, restricted entities with a central authority and expert or trained users, but are also expensive to build and maintain. Faceted systems (a sort of polyhierarchy) are useful with a wide range of users with different mental models and vocabularies. They are also more scalable because new items (for users) and new concepts (for cataloguers) can be added with a limited impact and with no need to start a new classification from scratch. Folksonomies require people to do the work by themselves for personal or social reasons. They are flat and ambiguous and cannot support a targeted search approach. However, they are also inexpensive, scalable and near to the language and mental model of users."
Some sources on Folksonomies
Folksonomy / Alex Wright – <http://www.agwright.com/blog/archives/000900.html> : January 5, 2005
Folksonomy (Wikipedia) – <http://en.wikipedia.org/wiki/Folksonomy>
Social bookmarking tools / T. Hammond, T. Hannay, B. Lund, J. Scott – <http://www.dlib.org/dlib/april05/hammond/04hammond.html/> : April 2005
Tagging explained by Business Week at http://www.businessweek.com/magazine/content/05_15/b3928112_mz063.htm
- The power of categorization
Sorting Things Out, by communications theorists Geoffrey C. Bowker and Susan Leigh Star (The MIT Press, 2000), covers a lot of conceptual ground in this context: "After arguing that categorization is both strongly influenced by and a powerful reinforcer of ideology, it follows that revolutions (political or scientific) must change the way things are sorted in order to throw over the old system. Who knew that such simple, basic elements of thought could have such far-reaching consequences?"
(Rob Lightner in an Amazon.com review)
- Connectivist learning theory, by George Siemens
"A central tenet of most learning theories is that learning occurs inside a person. Even social constructivist views, which hold that learning is a socially enacted process, promotes the principality of the individual (and her/his physical presence – i.e. brain-based) in learning. These theories do not address learning that occurs outside of people (i.e. learning that is stored and manipulated by technology)… In a networked world, the very manner of information that we acquire is worth exploring. The need to evaluate the worthiness of learning something is a meta-skill that is applied before learning itself begins. When knowledge is subject to paucity, the process of assessing worthiness is assumed to be intrinsic to learning. When knowledge is abundant, the rapid evaluation of knowledge is important. The ability to synthesize and recognize connections and patterns is a valuable skill. Including technology and connection making as learning activities begins to move learning theories into a digital age. We can no longer personally experience and acquire learning that we need to act. We derive our competence from forming connections. Karen Stephenson states: “Experience has long been considered the best teacher of knowledge. Since we cannot experience everything, other people’s experiences, and hence other people, become the surrogate for knowledge. ‘I store my knowledge in my friends’ is an axiom for collecting knowledge through collecting people.
Connectivism is the integration of principles explored by chaos, network, and complexity and self-organization theories…
Principles of connectivism:
- Learning and knowledge rests in diversity of opinions.
- Learning is a process of connecting specialized nodes or information sources.
- Learning may reside in non-human appliances.
- Capacity to know more is more critical than what is currently known.
- Nurturing and maintaining connections is needed to facilitate continual learning.
- Ability to see connections between fields, ideas, and concepts is a core skill.
- Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning activities.
- Communal learning: some preliminary sources to explore this topic:
I believe Pierre Levy has done some valuable work on recognizing the processes of communal validation, but I have not yet located the precise works.
Jack Whitehead, who explores "Living Education Theories", says that he has "been using a peer-to-peer process of social validation (modified from Habermas' views in his work on communication and the evolution of society) in assisting individuals to create their own living educational theories as they account to themselves and others for the lives they are living and their learning as they seek to live their values as fully as they can."
(see http://www.actionresearch.net and http://www.bath.ac.uk/~edsajw/living.shtml)
Alan Rayner has investigated 'inclusional' or 'empathic' learning, at http://www.bath.ac.uk/~bssadmr/inclusionality/rehumanizing.htm
- the difference between peer to peer processes and academic peer review:
“One of the early precedents of open source intelligence is the process of academic peer review. As academia established a long time ago, in the absence of fixed and absolute authorities, knowledge has to be established through the tentative process of consensus building. At the core of this process is peer review, the practice of peers evaluating each other's work, rather than relying on external judges. The specifics of the reviewing process are variable, depending on the discipline, but the basic principle is universal. Consensus cannot be imposed, it has to be reached. Dissenting voices cannot be silenced, except through the arduous process of social stigmatization. Of course, not all peers are really equal, not all voices carry the same weight. The opinions of those people to whom high reputation has been assigned by their peers carry more weight. Since reputation must be accumulated over time, these authoritative voices tend to come from established members of the group. This gives the practice of peer review an inherently conservative tendency, particularly when access to the peer group is strictly policed, as is the case in academia, where diplomas and appointments are necessary to enter the elite circle. The point is that the authority held by some members of the group – which can, at times, distort the consensus-building process – is attributed to them by the group, therefore it cannot be maintained against the will of the other group members."
- The University of Openness' distributed library project
"Unfortunately, the traditional library system doesn't do much to foster community. Patrons come and go, but there is very little opportunity to establish relationships with people or groups of people. In fact, if you try to talk with someone holding a book you like – you'll probably get shushed. The Distributed Library Project works in exactly the opposite way, where the very function of the library depends on interaction. How it Works: Users create accounts complete with bios and interest enumerations, then list the books and videos that they own. Those users are then free to browse the books that others have listed – sorted by proximity, interest, and book commonality. If a book or video is available, a user can check it out directly from the owner. There is an ebay-style feedback system for managing trust – users who return books on time get positive feedback, while users who damage books or return them late get negative feedback. These points create an overall “score” that lenders can use to judge the trustworthiness of a borrower.
Moxie, a Californian hacker and anarchist wrote a piece of software to catalogue and share books in his community. Since 2003 many small libraries have started using this (and related) pieces of software to catalogue their books and provide their communities with a system for sharing, lending and reviewing their collections of books, videos and music. There are now over 20 Distributed Library Project servers around the world. Using this as a starting point, the Antisystemic Library is starting to develop this software to allow people to archive and provide access to their collections of zines, maps, books, media and other resources. The next stage of development will be the publication of these archives on the Semantic Web, along with their interconnected cataloguing systems."
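The eBay-style feedback mechanism described above is simple enough to sketch. The scoring rule here (positives minus negatives, plus a positive-feedback ratio) is an assumption for illustration, not the Distributed Library Project's actual formula:

```python
def borrower_score(feedback):
    """Aggregate a borrower's loan history into a trust score.
    'feedback' is a list of "+" (book returned on time) and "-" (late or
    damaged) marks; the scoring rule is a guess at an eBay-like scheme."""
    positives = feedback.count("+")
    negatives = feedback.count("-")
    score = positives - negatives
    rating = positives / len(feedback) if feedback else None
    return score, rating

score, rating = borrower_score(["+", "+", "+", "-"])
# score == 2, rating == 0.75
```

Whatever the exact formula, the design point stands: trust between strangers is produced by the accumulated record of past peer interactions, not by a central authority.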
- The Anti-systemic Library project of the University of Openness
"The principle of an anti-systemic library is that it does not have a catalogue, i.e. a hierarchical organisation of knowledge; instead it allows each library, each archivist and each researcher to use their own archiving and searching systems, based on their own bibliographies, languages, interests, politics and codes. The libraries that use these principles, considered as a whole, can be called 'The Anti-systemic Library'.
The Semantic Web initiative is attempting to produce an information network with 'enriched' semantic coherence, while at the same time allowing local information to be described and enhanced locally. For example, describing my book collection, I use the category 'fascist propaganda' and someone else uses 'nazi counter-propaganda', or a word in a non-english language that means something similar. If we both use a computer readable syntax to describe our collections, we can programme a robot to link our libraries together. This robot would be able to read all our catalogues and infer that since we all have a number of identical books in these categories, that there is a semantic connection between fascist propaganda, nazi propaganda and the non-english word – and that the collections might be usefully grouped together."
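The 'robot' in this passage can be sketched as a simple set intersection over catalogues. Everything below, the category labels, ISBNs and overlap threshold, is invented for illustration; a real implementation would work over machine-readable RDF descriptions rather than Python sets:

```python
def linked_categories(lib_a, lib_b, threshold=2):
    """If two locally-named categories share enough identical books,
    infer a semantic link between the labels, as the quoted 'robot' would."""
    links = []
    for cat_a, books_a in lib_a.items():
        for cat_b, books_b in lib_b.items():
            shared = books_a & books_b  # identical books in both categories
            if len(shared) >= threshold:
                links.append((cat_a, cat_b, len(shared)))
    return links

# Invented catalogues with different local labels for similar collections.
mine = {"fascist propaganda": {"isbn-1", "isbn-2", "isbn-3"}}
theirs = {"nazi counter-propaganda": {"isbn-2", "isbn-3", "isbn-9"}}

print(linked_categories(mine, theirs))
# -> [('fascist propaganda', 'nazi counter-propaganda', 2)]
```

Note that no shared vocabulary is ever imposed: the semantic connection is inferred from overlapping holdings, so each archivist keeps their own labels, which is exactly the anti-systemic principle.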
- The characteristics of chaordic organizations
The Chaordic Commons is a network infrastructure supporting P2P-like initiatives, created by Dee Hock, the former chairman of Visa International and author of Birth of the Chaordic Age. Here are the principles behind the movement. Chaordic organizations:
- Are based on clarity of shared purpose and principles.
- Are self-organizing and self-governing in whole and in part.
- Exist primarily to enable their constituent parts.
- Are powered from the periphery, unified from the core.
- Are durable in purpose and principle, malleable in form and function.
- Equitably distribute power, rights, responsibility and rewards.
- Harmoniously combine cooperation and competition.
- Learn, adapt and innovate in ever expanding cycles.
- Are compatible with the human spirit and the biosphere.
- Liberate and amplify ingenuity, initiative and judgment.
- Are compatible with and foster diversity, complexity and change.
- Constructively utilize and harmonize conflict and paradox.
- Restrain and appropriately embed command and control methods.
- John Holloway on the new temporality of change
"Time is central to any consideration of power and counter-power or anti-power. The traditional left is centred on waiting, on patience. The social democratic parties tell us: “Wait until the next election; then we will come to power and things will be different.” The Leninist parties say: “Wait for the revolution; then we’ll take power and life will begin.” But we cannot wait. Capitalism is destroying the world and we cannot be patient. We cannot wait for the next long wave or the next revolutionary opportunity. We cannot wait until the time is right. We must revolt now, we must live now.
The traditional left operates with a capitalist concept of time. In this concept, capitalism is a continuum, it has a duration, it will be there until the day of revolution comes. It is this duration, this continuum that we have to break. How? By refusing. By understanding that capitalism does not have any duration independent of us. If capitalism exists today, it is not because it was created one hundred or two hundred years ago, but because we (the workers of the world, in the broadest sense) created it today. If we do not create it tomorrow, it will not exist. Capital depends on us for its existence, from one moment to the next. Capital depends on converting our doing into alienated work, on converting our life into survival. We make capitalism. The problem of revolution is not to abolish capitalism but to stop making it.
But there is also a second temporality. To give force to our refusal, we have to back it up with the construction of an alternative world. If we refuse to submit to capital, we must have some alternative way of living and this means the patient creation of other ways of organising our activity, our doing.
If the first temporality is that of innocence, this is the temporality of experience. This is the temporality of building our own power, our power-to, our power to do things in a different way. Building our own power-to is a very different thing from taking power or seizing power. If we organise ourselves to take power, to try to win state power, then inevitably we put ourselves into the logic of capitalist power, we adopt capitalist forms of organisation which impose separations, separations between leaders and masses, between citizens and foreigners, between public and private. If we focus on the state and the winning of state power, then inevitably we reproduce within our own struggles the power of capital. Building our own power-to involves different forms of organisation, forms which are not symmetrical to capital’s forms, forms which do not separate and exclude. Our power, then, is not just a counter-power, it is not a mirror-image of capitalist power, but an anti-power, a power with a completely different logic — and a different temporality.
The traditional temporality, the temporality of taking power, is in two steps: first wait and build the party, then there will be the revolution and suddenly everything will be different. The second temporality comes after the first one. The taking of power operates as a pivot, a breaking point in the temporality of the revolutionary process. Our temporality, the temporality of building our own anti-power is also in two steps, but the steps are exactly the opposite, and they are simultaneous. First: do not wait, refuse now, tear a hole, a fissure in the texture of capitalist domination now, today. And secondly, starting from these refusals, these fissures, and simultaneously with them, build an alternative world, a different way of doing things, a different sort of social relations between people. Here it cannot be a sudden change, but a long and patient struggle in which hope lies not in the next election or in the storming of the Winter Palace but in overcoming our isolation and coming together with other projects, other refusals pushing in the same direction. This means not just living despite capitalism, but living in-against-and-beyond capitalism. It means an interstitial conception of revolution."
- Johan Soderbergh on the gift economy:
"On the question of whether peer-to-peer is a gift economy, I take a slightly different viewpoint on what the archaic gift economy is really about. In my mind, when discussed on the internet, the focus has wrongly been on the gift economy as an inversion of the logic of the market economy, where accumulation of capital is simply replaced with accumulation of moral debt. My reading of Marcel Mauss and Levi-Strauss is that the gift economy is not primarily about allocating resources. Usually, tribal people are self-sustaining in life-supportive goods, and gift swapping is restricted to a particular class of goods, tokens such as clams and jewelry. The real importance of the gift is to strike alliances between giver and receiver. Both of them are winners; to put it pointedly, the loser is the third party who was left out of the exchange. Hence, I think the gift economy parallel is valid in parts of the virtual community, where alliances and communal bonding are key, and not valid in other parts, where relations are completely impersonal." (personal communication, March 2005)
John Frow on the gift economy, citing Gregory:
"a gift economy depends upon the creation of debt, where what is at stake is not the things themselves or the possibility of material profit but the personal relationships that are formed and perpetuated by ongoing indebtedness. Things in the gift economy are the vehicles, the effective mediators and generators, of social bonds: putting this in terms derived from Marx's theory of commodity fetishism, Gregory writes that `things and people assume the social form of objects in a commodity economy while they assume the social form of persons in a gift economy'."
Recall the schematic opposition that Gregory sets up between two modes of exchange:
(source: personal communication, Word manuscript of chapter 2 of xxx)
- The original Tragedy of the Commons essay, at http://dieoff.org/page95.htm
- Participation Capture in Bittorrent
"BitTorrent is a radical advance over the peer-to-peer systems which preceded it. Cohen realized that popularity is a good thing, and designed BitTorrent to take advantage of it. When a file (movie, music, computer program, it's all just bits) is published on BitTorrent, everyone who wants the file is required to share what they have with everyone else. As you're downloading the file, those parts you've already downloaded are available to other people looking to download the file. This means that you’re not just "leeching" the file, taking without giving back; you're also sharing the file with anyone else who wants it. As more people download the file, they offer up what they’ve downloaded, and so on. As this process rolls on, there are always more and more computers to download the file from. If a file gets very popular, you might be getting bits of it from hundreds of different computers, all over the Internet – simultaneously. This is a very important point, because it means that as BitTorrent files grow in popularity, they become progressively faster to download. Popularity isn’t a scourge in BitTorrent – it's a blessing."
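The reciprocity described above can be illustrated with a minimal round-based simulation. This is a sketch of the principle only, not of BitTorrent's actual piece-selection or tit-for-tat algorithms:

```python
def swarm_rounds(num_pieces, peers, rounds):
    """Round-based sketch of BitTorrent reciprocity: each round, every
    downloader fetches one missing piece from ANY peer that holds it,
    and everything already fetched is immediately served to others."""
    have = [set() for _ in range(peers)]
    have[0] = set(range(num_pieces))  # peer 0 is the initial publisher
    for _ in range(rounds):
        for i in range(1, peers):
            for piece in range(num_pieces):
                if piece in have[i]:
                    continue
                # the piece may come from the seeder OR any part-done downloader
                if any(piece in have[j] for j in range(peers) if j != i):
                    have[i].add(piece)
                    break  # one piece per peer per round
    return [len(h) for h in have]

# after enough rounds every peer is complete, however large the swarm
assert swarm_rounds(3, 4, 3) == [3, 3, 3, 3]
```

The key property the quote describes shows up directly: after the first round, every downloader holding even one piece is itself a source, so the number of available upload points grows with the swarm instead of a fixed server being divided among ever more clients.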
- Participation Capture, Sousveillance, Panoptical surveillance
Sousveillance is the conscious capture of processes from below, by individual participants; surveillance is from the top down, while participation capture is inscribed in the very protocols of cooperation and is therefore automatic:
"To surveil means to watch from above. The reference here is to Michel Foucault's Surveiller et Punir (Discipline and Punish), which describes the principle of panopticism, the architecture of modern prisons that allows a single person to see everything from a central point. It is thus a concept at once physical, hierarchical and spiritual. Sousveillance implicitly indicates the opposite, that is, watching from below (see the smartmobs article, in English). Sousveillance is the art, science and technology of capturing (recording) personal experience. It involves the processing, archiving, indexing and transmission of audiovisual recordings by means of cybernetic prostheses such as aids for vision, for visual memory, etc. The legal, ethical and regulatory issues involved in sousveillance have yet to be explored. Consider, however, an example such as the recording of a telephone conversation. When one or more of the parties concerned record the conversation, this is called sousveillance, whereas when the same conversation is recorded by an external entity (e.g. an intelligence service recording a confidential conversation between a lawyer and his client), this is called 'surveillance'. Audio surveillance is authorized in most states, whereas sousveillance is not."
- Clay Shirky on Flaming as a Tragedy of the Commons
"Flaming is one of a class of economic problems known as The Tragedy of the Commons. Briefly stated, the tragedy of the commons occurs when a group holds a resource, but each of the individual members has an incentive to overuse it. (The original essay used the illustration of shepherds with common pasture. The group as a whole has an incentive to maintain the long-term viability of the commons, but with each individual having an incentive to overgraze, to maximize the value they can extract from the communal resource.) In the case of mailing lists (and, again, other shared conversational spaces), the commonly held resource is communal attention. The group as a whole has an incentive to keep the signal-to-noise ratio low and the conversation informative, even when contentious. Individual users, though, have an incentive to maximize expression of their point of view, as well as maximizing the amount of communal attention they receive. It is a deep curiosity of the human condition that people often find negative attention more satisfying than inattention, and the larger the group, the likelier someone is to act out to get that sort of attention."
More by Shirky on the group 'as its own worst enemy', at http://shirky.com/writings/group_enemy.html; the highly recommended Shirky archive is at http://shirky.com
- A local mom-and-pop store does not need to grow; it merely needs to provide sustenance for the family. But financial capital must grow, and a company must grow faster than, or at least as fast as, the average of its sector, otherwise it collapses.
- How the market's swarm intelligence differs from P2P processes, Jean Francois Noubel:
"Limits of swarm intelligence: swarm intelligence works only on the condition that its agents are uniform and de-individuated. These agents, anonymous among the multitude of other anonymous agents, are easily sacrificed, even on a large scale, in the name of the overall equilibrium of the system. While this may seem acceptable for social insects, where each individual is undifferentiated, it obviously is not for animal species whose equilibrium rests precisely on the differentiation of individuals, and in particular for humans. Yet this fundamental distinction seems to be ignored by many economic theories, which base their models and doctrines on interactions between undifferentiated agents (the consumer, the citizen). The liberal approach postulates that the system must find its equilibrium by itself at the macroscopic level, through the interplay of internal and external constraints (some refer to Adam Smith's famous expression, the invisible hand). Modelling human society as a sum of undifferentiated agents, even with random variations in behaviour, is at best an epistemological error, at worst a very dangerous doctrine."
- Bill Gates on the copyright 'communists', in CNET:
"CNET: In recent years, there's been a lot of people clamoring to reform and restrict intellectual-property rights. It started out with just a few people, but now there are a bunch of advocates saying, "We've got to look at patents, we've got to look at copyrights." What's driving this, and do you think intellectual-property laws need to be reformed?
No, I'd say that of the world's economies, there's more that believe in intellectual property today than ever. There are fewer communists in the world today than there were. There are some new modern-day sort of communists who want to get rid of the incentive for musicians and moviemakers and software makers under various guises. They don't think that those incentives should exist. And this debate will always be there. I'd be the first to say that the patent system can always be tuned—including the U.S. patent system. There are some goals to cap some reform elements. But the idea that the United States has led in creating companies, creating jobs, because we've had the best intellectual-property system—there's no doubt about that in my mind, and when people say they want to be the most competitive economy, they've got to have the incentive system. Intellectual property is the incentive system for the products of the future."
(http://news.com.com/Gates+taking+a+seat+in+your+den/2008-1041_3-5514121.html?tag=nefd.ac ; Liberation summarises the furor at http://www.liberation.fr/page.php?Article=267076)
- A free-market advocate on the merits of dot.communism:
"Left-leaning intellectuals have long worried about the way in which our public space – shopping malls, city centres, urban parks, etc. – have become increasingly private. Other liberals, like writer Mickey Kaus, have emphasised the dangers to civic life of pervasive economic inequality. But the web has provided small answers to both these conundrums. As our public life has shrunk in reality, it has expanded exponentially online. Acting as a critical counter-ballast to market culture, the web has made interactions between random, equal citizens, far more possible than ever before." (http://www.andrewsullivan.com/text/hits_article.html?9,culture)
- Markets without Capitalism?
Silvio Gesell is one of the main thinkers of this tradition. Gesell was briefly finance minister in the short-lived Bavarian Soviet Republic and was greatly appreciated in his time by figures such as Keynes and Martin Buber.
“In 1891 Silvio Gesell (1862-1930), a German-born entrepreneur living in Buenos Aires, published a short booklet entitled Die Reformation im Münzwesen als Brücke zum sozialen Staat (Currency Reform as a Bridge to the Social State), the first of a series of pamphlets presenting a critical examination of the monetary system. It laid the foundation for an extensive body of writing inquiring into the causes of social problems and suggesting practical reform measures. His experiences during an economic crisis at that time in Argentina led Gesell to a viewpoint substantially at odds with the Marxist analysis of the social question: the exploitation of human labour does not have its origins in the private ownership of the means of production, but rather occurs primarily in the sphere of distribution due to structural defects in the monetary system. Like the ancient Greek philosopher Aristotle, Gesell recognised money's contradictory dual role as a medium of exchange for facilitating economic activity on the one hand and as an instrument of power capable of dominating the market on the other hand. The starting point for Gesell's investigations was the following question: How could money's characteristics as a usurious instrument of power be overcome, without eliminating its positive qualities as a neutral medium of exchange? He attributed this market-dominating power to two fundamental characteristics of conventional money: Firstly, money as a medium of demand is capable of being hoarded, in contrast to human labour or goods and services on the supply side of the economic equation. It can be temporarily withheld from the market for speculative purposes without its holder being exposed to significant losses. Secondly, money enjoys the advantage of superior liquidity to goods and services. In other words, it can be put into use at almost any time or place and so enjoys a flexibility of deployment similar to that of a joker in a card game.
"Gesell's theory of a Free Economy based on land and monetary reform may be understood as a reaction both to the laissez-faire principle of classical liberalism and to Marxist visions of a centrally planned economy. It should not be thought of as a third way between capitalism and communism in the sense of subsequent "convergence theories" or so-called "mixed economy" models, i.e. capitalist market economies with global state supervision, but rather as an alternative beyond hitherto realized economic systems. In political terms it may be characterised as "a market economy without capitalism" … Gesell's alternative economic model is related to the liberal socialism of the cultural philosopher Gustav Landauer (1870-1919), who was also influenced by Proudhon and who for his part strongly influenced Martin Buber (1878-1965). There are intellectual parallels to the liberal socialism of the physician and sociologist Franz Oppenheimer (1861-1943) and to the social philosophy of Rudolf Steiner (1861-1925), the founder of the anthroposophic movement … An association called Christen für gerechte Wirtschaftsordnung (Christians for a Just Economic Order) promotes the study of land and monetary reform theories in the light of Jewish, Christian and Islamic religious doctrines critical of land speculation and the taking of interest. Margrit Kennedy, Helmut Creutz and other authors have examined the contemporary relevance of Gesell's economic model and tried to bring his ideas up to date." (http://userpage.fu-berlin.de/~roehrigw/onken/engl.htm)
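Gesell's proposed remedy to money's hoardability, developed in his writings as "free money" (Freigeld), was demurrage: a periodic carrying charge that makes idle cash lose value over time, so that money on the demand side decays like the goods and labour it faces on the supply side. The mechanics reduce to compound depreciation; the sketch below uses an illustrative 5% annual rate, not a figure taken from Gesell.

```python
def hoarded_value(principal, annual_demurrage_rate, years):
    """Value left after holding cash that loses a fixed share of itself per year."""
    return principal * (1 - annual_demurrage_rate) ** years

# Conventional money keeps its face value when withheld from the market for
# speculative purposes; demurrage money shrinks, so its holder is pushed to
# spend, lend, or invest it rather than hoard it.
print(hoarded_value(100.0, 0.05, 10))  # ≈ 59.87
```

The point of the arithmetic is that the cost of withholding money from circulation compounds, removing the "significant losses"-free speculation the quotation above describes.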
Books to explore this tradition:
Silvio Gesell, The Natural Economic Order (translation by Philip Pye). London: Peter Owen Ltd., 1958.
Dudley Dillard, Proudhon, Gesell and Keynes – An Investigation of some "Anti-Marxian-Socialist" Antecedents of Keynes' General Theory. Doctoral thesis, University of California, 1949. St. Georgen: Hackbarth Verlag, 1997. ISBN 3-929741-14-8.
Leonard Wise, Great Money Reformers – Silvio Gesell, Arthur Kitson, Frederic Soddy. London: Holborn Publishing, 1949.
International Association for a Natural Economic Order, The Future of Economy – A Memoir for Economists. Lütjenburg: Fachverlag für Sozialökonomie, 1984/1989. (P.O. Box 1320, D-24319 Lütjenburg)
Margrit Kennedy, Interest and Inflation Free Money – Creating an Exchange Medium That Works for Everybody and Protects the Earth. Okemos/Michigan, 1995.
- Optimal usage of sharing principles vs. market economies, by Yochai Benkler
"The paper offers a framework to explain large scale effective practices of sharing private, excludable goods. It starts with case studies of distributed computing and carpooling as motivating problems. It then suggests a definition for “shareable goods” as goods that are lumpy and mid-grained in size, and explains why goods with these characteristics will have systematic overcapacity relative to the requirements of their owners. The paper then uses comparative transaction costs analysis, focused on information characteristics in particular, combined with an analysis of diversity of motivations, to suggest when social sharing will be better than secondary markets to reallocate this overcapacity to non-owners who require the functionality. The paper concludes with broader observations about the role of sharing as a modality of economic production as compared to markets and hierarchies (whether states or firms), with a particular emphasis on sharing practices among individuals who are strangers or weakly related, its relationship to technological change, and some implications for contemporary policy choices regarding wireless regulation, intellectual property, and communications network design."
A commentary on the Benkler essay by The Economist, at http://www.economist.com/finance/displayStory.cfm?story_id=3623762
- Quote on digitalisation as 'total automation', by Johan Söderberg
"The state of total automation hinted at by Ernest Mandel would be reached when fixed capital, without any injection of living labour, spit out an infinite volume of goods at instant speed. It is hard to imagine a machine with such dimensions, less than visualising futurist gadgets or (just slightly more down-to-earth) nanotechnology fancies. And yet, it is reality in most forms of cultural and immaterial production. That is what is meant by saying, that information can be copied infinitely without injecting additional living labour.Digitalisation of immaterial labour has leapfrogged capitalism to the endpoint of total automation. There is hardly any value-adding labour taking place in this form of production. One click is all labour it takes to duplicate immaterial goods. The main input of living labour is instead at the start-up of the production process. In other words, in the innovation of it. This is where we find immaterial labour. All forms of labour that can be objectified in digits are subject to infinite reproducibility. It is the Pyrrhic victory of capital. The end destination of capital's long quest to disband living labour by perfecting the techniques of separating and storing human creativity in systematised, codified knowledge. However, like Phoenix, living labour returns with a vengeance."
- An interview with McKenzie Wark, at http://frontwheeldrive.com/mckenzie_wark.html
- Definition and comment on the vectoral class, by McKenzie Wark
"Information, like land or capital, becomes a form of property monopolised by a class of vectoralists, so named because they control the vectors along which information is abstracted, just as capitalists control the material means with which goods are produced, and pastoralists the land with which food is produced. Information circulated within working class culture as a social property belonging to all. But when information in turn becomes a form of private property, workers are dispossessed of it, and must buy their own culture back from its owners, the vectoralist class. The whole of time, time itself, becomes a commodified experience. Vectoralists try to break capital's monopoly on the production process, and subordinate the production of goods to the circulation of information. The leading corporations divest themselves of their productive capacity, as this is no longer a source of power.
Their power lies in monopolising intellectual property – patents and brands – and the means of reproducing their value – the vectors of communication. The privatisation of information becomes the dominant, rather than a subsidiary, aspect of commodified life. As private property advances from land to capital to information, property itself becomes more abstract. As capital frees land from its spatial fixity, information as property frees capital from its fixity in a particular object. … Information, once it becomes a form of property, develops beyond a mere support for capital – it becomes the basis of a form of accumulation in its own right … The vectoral class comes into its own once it is in possession of powerful technologies for vectoralising information. The vectoral class may commodify information stocks, flows, or vectors themselves. A stock of information is an archive, a body of information maintained through time that has enduring value. A flow of information is the capacity to extract information of temporary value out of events and to distribute it widely and quickly. A vector is the means of achieving either the temporal distribution of a stock, or the spatial distribution of a flow of information. Vectoral power is generally sought through the ownership of all three aspects."
- Definition and comment on the vector, by McKenzie Wark
"In epidemiology, a vector is the particular means by which a given pathogen travels from one population to another. Water is a vector for cholera, bodily fluids for HIV. By extension, a vector may be any means by which information moves. Telegraph, telephone, television, telecommunications: these terms name not just particular vectors, but a general abstract capacity that they bring into the world and expand. All are forms of telesthesia, or perception at a distance. A given media vector has certain fixed properties of speed, bandwidth, scope and scale, but may be deployed anywhere, at least in principle. The uneven development of the vector is political and economic, not technical… With the commodification of information comes its vectoralisation. Extracting a surplus from information requires technologies capable of transporting information through space, but also through time. The archive is a vector through time just as communication is a vector that crosses space... The vectoral class may commodify information stocks, flows, or vectors themselves. A stock of information is an archive, a body of information maintained through time that has enduring value. A flow of information is the capacity to extract information of temporary value out of events and to distribute it widely and quickly. A vector is the means of achieving either the temporal distribution of a stock, or the spatial distribution of a flow of information. "
- Definition and comment on the hacker class, by McKenzie Wark
"The hacker class, producer of new abstractions, becomes more important to each successive ruling class, as each depends more and more on information as a resource. The hacker class arises out of the transformation of information into property, in the form of intellectual property, including patents, trademarks, copyright and the moral right of authors. The hacker class is the class with the capacity to create not only new kinds of object and subject in the world, not only new kinds of property form in which they may be represented, but new kinds of relation beyond the property form. The formation of the hacker class as a class comes at just this moment when freedom from necessity and from class domination appears on the horizon as a possibility…. Hackers must calculate their interests not as owners, but as producers, for this is what distinguishes them from the vectoralist class. Hackers do not merely own, and profit by owning information. They produce new information, and as producers need access to it free from the absolute domination of the commodity form. Hacking as a pure, free experimental activity must be free from any constraint that is not self imposed. Only out of its liberty will it produce the means of producing a surplus of liberty and liberty as a surplus. " (http://subsol.c3.hu/subsol_2/contributors0/warktext.html)
- The emergence of peer-to-peer exchanges, such as Zopa
Zopa (Zone of Possible Agreement): the new company is an amalgam of a number of business philosophies. It is where eBay meets credit unions by way of easyJet, the peer-to-peer movement and Betfair. You can lend up to £25,000 through Zopa and your money is divided among 50 borrowers (who have already been screened to ensure they have good credit ratings) to minimise risks of default.
( http://www.guardian.co.uk/economicdispatch/story/0,12498,1435623,00.html )
Zopa is at http://www.zopa.com/ZopaWeb/
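The risk logic behind dividing each lender's money among 50 borrowers is plain diversification: a single default wipes out one fiftieth of the stake rather than all of it. A minimal sketch, with illustrative figures (the equal-split assumption follows the Guardian description above; the default counts are hypothetical):

```python
def loss_if_defaults(total_lent, num_borrowers, defaults):
    """Loss when `defaults` out of `num_borrowers` equal-sized loans fail entirely."""
    return total_lent * defaults / num_borrowers

# Lending GBP 1,000 to a single borrower puts the whole sum at risk;
# spread over 50 pre-screened borrowers, even two total defaults cost only GBP 40.
print(loss_if_defaults(1000, 1, 1))   # → 1000.0
print(loss_if_defaults(1000, 50, 2))  # → 40.0
```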
A successful German lending and borrowing experiment, dieborger.de, at http://theage.com.au/articles/2005/03/17/1110913726676.html?oneclick=true
Other peer-based exchanges are described at http://www.wired.com/news/culture/0,1284,66800,00.html