Peer Production

People cooperate voluntarily on an equal footing (as peers) in order to reach a common goal. [1]

Introductory Citations

"When costs of participation are low enough, any motivation may be sufficient to lead to a contribution."

- Michael Feldstein [2]

"Peer production is viable when: 1. capital costs (needed for production) fall far enough and 2. coordination costs fall far enough. Cheap computing and communication reduce both of these exponentially, so peer production becomes inevitable." (http://jed.jive.com/?p=23)


Definition

1.

"Commons-based peer production is a term coined by professor Yochai Benkler to describe a new model of economic production, different from both markets and firms, in which the creative energy of large numbers of people is coordinated (usually with the aid of the internet) into large, meaningful projects, largely without traditional hierarchical organization or financial compensation." (http://en.wikipedia.org/wiki/Peer_production)


2. Jose Ramos:

" Peer to peer production describes a peer based production of goods and services. While inter-related, it is different to crowd sourcing in that the locus of control in the production of goods and services is not exercised by a firm, government or a particular institution for its benefit, but rather the production of goods and services is a collaborative affair among individuals in an emergent community. Michel Bauwens, founder of the Peer-to-peer Foundation, has documented the emergence of a peer-to-peer culture globally. He argues that fundamental p2p shifts include:

1) A New Mode of Production – Peer-to-peer systems “produce use-value through the free cooperation of producers who have access to distributed capital: this is the P2P production mode, a 'third mode of production' different from for-profit or public production by state-owned enterprises. Its product is not exchange value for a market, but use-value for a community of users.”

2) A New Mode of Governance - Peer-to-peer systems “are governed by the community of producers themselves, and not by market allocation or corporate hierarchy: this is the P2P governance mode, or 'third mode of governance.’”

3) A New Mode of Distribution - Peer-to-peer systems “make use-value freely accessible on a universal basis, through new common property regimes. This is its distribution or 'peer property mode': a 'third mode of ownership,' different from private property or public (state) property.“ Bauwens (2006) (http://dev.services2020.net/node/1322)

Source: Bauwens, M. (2006). The Political Economy of Peer Production. Post-Autistic Economics Review (37).


3. George Dafermos:

Peer production "projects produce a good that is free to use, modify and redistribute (the 'commons' part) and, on the other, their development process is based on the self-selection of tasks by their developers, while (important) decisions are being made collectively on the basis of consensus (the 'peer' aspect). In short, in pure peer production there's no distinction between those who work and those who manage.

So, by using these two criteria as the crucial dimensions (the *commons* and *peer* part), hybrid peer production models can be understood as those that, to some extent, detract from the common property regime and the collective decision making model characteristic of pure peer production. More specifically, they produce something that is free to use and modify, but not to redistribute, as the 'parent company' decides and controls what goes into the official distribution. Such 'hybrid' undertakings are typically part of a company's business model (e.g. 'give away the razor, sell the blades') – that is to say, it's a mode of production directed to market exchange. In parallel, though this model encourages outside contributions and opens up participation in the product development process to a wider number of participants than traditional business models, the governance of these projects is always subject to some degree of centralised, top-down control. Examples I'd include in this category are openoffice, Mozilla (especially in the years before 2005) and a plethora of small companies making and selling a FOSS product like MySQL and Canonical (Ubuntu)." (email, January 2012)


P2P Commentary

Michel Bauwens:

In my own definition, based on the idea of a Circulation of the Common, I have used three criteria:

1) open input: contributors are free to contribute and can produce or have access to, free and open 'raw material'

2) process: marked by participatory governance

3) output: the output is put in a 'commons' that can be used iteratively to create new layers of open and free input

Characteristics

The Four Constituent Building Blocks

Christian:

"four essential building blocks of generalized peer production:

1. Voluntary cooperation among peers: Peer production is goal-driven; people cooperate in order to reach a shared goal. Participants decide for themselves whether and how to get involved; nobody can order others around. Cooperation is stigmergic: people leave hints about what there is to do and others decide voluntarily which hints (if any) to follow.

2. Common knowledge: Digital peer production is based on treating knowledge as a commons that can be used, shared, and improved by all. Projects developing and sharing free design information on how to produce, use, repair and recycle physical goods (often called open-source hardware) provide a basis for physical peer production.

3. Common resources: Free design information is not enough for physical production; access to land and other natural resources is essential as well. In the logic of peer production, these too become commons to be used, shared (in a fair manner) and maintained by all.

4. Distributed, openly accessible means of production: In peer production, the means of production tend to be distributed among many people; there is no single person or entity controlling their usage. Hackerspaces, Fab Labs, and mesh networks provide the basis for a distributed physical production infrastructure. If the machines and other equipment used in such open making facilities become themselves the result of peer production, the circle is closed: peer producers can jointly produce, use and manage their own productive facilities, allowing them to overcome the dependency on proprietary, market-driven production." (http://fscons.org/extensions/self-organized-plenty-emergence-physical-peer-production)


The ten social patterns of peer production

See: Peer Production Patterns. By Stefan Meretz.

   * Beyond Exchange
   * Beyond Scarcity
   * Beyond Commodity
   * Beyond Money
   * Beyond Labor
   * Beyond Classes
   * Beyond Exclusion
   * Beyond Socialism
   * Beyond Politics
   * Germ Form

Possibly missing from Stefan's list: no returns on property, i.e. "Beyond Property"?

Aspects of Peer production practice

  1. Anti-Credentialism
  2. Anti-Rivalry ; see also: Anti-Rivalness of Free Software
  3. Communal Validation
  4. Equipotentiality
  5. For Benefit ; see also:Benefit Sharing; Benefit-Driven Production
  6. Forking
  7. Holoptism
  8. Modularity
  9. Negotiated Coordination
  10. Produsage
  11. Stigmergy
  12. Task Work


Guidelines for Successful Cooperation in Peer Production

Christian Siefkes:


"1. Find other people who have the same (or a similar) problem or goal as you.

2. Join forces with them in order to produce what you want to have or achieve (need-driven production).

3. Be fair and accept the others as your peers—since you all participate voluntarily, nobody can order others around.

4. Be generous and share what you can. By doing so, you’ll attract further users, some of whom will sooner or later turn into contributors. There is rarely a clear separation between users and contributors, but rather a smooth transition: most participants use your product only, some contribute occasionally, and a small percentage contributes intensely on a long-time basis.

5. Be open and welcoming to make it easy for “newbies” to join and contribute to your project.

6. Leave hints on what there is to do and which contributions you would like to see (Stigmergy). Frequently some other participants will take up a hint and self-select to handle one of the wanted tasks. The more participants care for a task, the more visible the hints will be, increasing the chance that somebody self-selects for the task.

7. Jointly develop the rules and structures that are most suitable for reaching your goals.

8. Strive to reach rough consensus regarding the goals of your projects and the best ways of realizing them. Narrow or arbitrary decisions will tend to drive away the people that disagree with them.

9. But if you really can’t agree on an issue, that’s not so bad. Just fork the project and do your own thing." (http://www.keimform.de/2010/self-organized-plenty/)
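
The stigmergic hint mechanism described in guideline 6 can be modelled very simply: hints become more visible as more participants care about them, and contributors self-select the most visible hint that matches their own skills. The following Python sketch is only a toy illustration of that logic (the hint names, the skill matching and the data structures are invented here, not taken from Siefkes):

    # Toy model of stigmergic task selection (illustrative only, not from the source text).
    # Hints gain visibility with the number of participants who care about them,
    # and contributors self-select the most visible unclaimed hint they can handle.
    from dataclasses import dataclass, field

    @dataclass
    class Hint:
        description: str
        interested: set = field(default_factory=set)  # participants who flagged interest
        claimed_by: str = ""

        @property
        def visibility(self) -> int:
            return len(self.interested)  # more interest -> more visible hint

    def self_select(contributor, skills, hints):
        """Pick the most visible unclaimed hint whose wording overlaps the contributor's skills."""
        candidates = [h for h in hints
                      if not h.claimed_by and skills & set(h.description.lower().split())]
        if not candidates:
            return None
        chosen = max(candidates, key=lambda h: h.visibility)
        chosen.claimed_by = contributor
        return chosen

    board = [Hint("translate documentation"), Hint("fix parser bug"), Hint("design logo")]
    board[1].interested.update({"ann", "bob", "eve"})  # many people care about the bug
    board[0].interested.add("ann")
    picked = self_select("carol", {"parser", "documentation"}, board)
    print(picked.description)  # -> "fix parser bug", the most visible matching hint

Nothing in this toy forces anyone to take a task; visibility only biases voluntary self-selection, which is the point of guidelines 6 and 9.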

Why is it emerging now?

Yochai Benkler

Yochai Benkler advances a powerful hypothesis: that lowering the capital requirements of information production


1. reduces the value of proprietary strategies and makes public, shared information more important,

2. encourages a wider range of motivations to produce, thus demoting supply-and-demand from prime motivator to one-of-many, and

3. allows large-scale, cooperative information production efforts that were not possible before, from open-source software, to search engines and encyclopedias, to massively multi-player online games.

See his book: The Wealth of Networks


Clay Shirky

Felix Stalder explains how Clay Shirky amends Yochai Benkler's take:

" There are limits to the scale particular forms of organisation can handle efficiently. Ever since the publication of Roland Coase's seminal article ‘The Nature of the Firm’ in 1937, economists and organisational theorists have been analysing the ‘Coasian ceiling’. It indicates the maximum size an organisation can grow to before the costs of managing its internal complexity rise beyond the gains the increased size can offer. At that point, it becomes more efficient to acquire a resource externally (e.g. to buy it) than to produce it internally. This has to do with the relative transaction costs generated by each way of securing that resource. If these costs decline in general (e.g. due to new communication technologies and management techniques) two things can take place. On the one hand, the ceiling rises, meaning large firms can grow even larger without becoming inefficient. On the other hand, small firms are becoming more competitive because they can handle the complexities of larger markets. This decline in transaction costs is a key element in the organisational transformations of the last three decades, creating today's environment where very large global players and relatively small companies can compete in global markets. Yet, a moderate decline does not affect the basic structure of production as being organised through firms and markets.

In 2002, Yochai Benkler was the first to argue that production was no longer bound to the old dichotomy between firms and markets. Rather, a third mode of production had emerged which he called ‘commons-based peer production’. Here, the central mode of coordination was neither command (as it is inside the firm) nor price (as it is in the market) but self-assigned volunteer contributions to a common pool of resources. This new mode of production, Benkler points out, relies on the dramatic decline in transaction costs made possible by the internet. Shirky develops this idea in a different direction by introducing the concept of the ‘Coasian floor’.

Organised efforts underneath this floor are, as Shirky writes,

‘valuable to someone but too expensive to be taken on in any institutional way, because the basic and unsheddable costs of being an institution in the first place make those activities not worth pursuing’.

Until recently, life underneath that floor was necessarily small scale because scaling up required building up an organisation and this was prohibitively expensive. Now, and this is Shirky's central claim, even large group efforts are no longer dependent on the existence of a formal organisation with its overheads. Or, as he memorably puts it, ‘we are used to a world where little things happen for love, and big things happen for money. ... Now, though, we can do big things for love’." (http://www.metamute.org/en/content/analysis_without_analysis)
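
The make-or-buy reasoning behind the 'Coasian ceiling' and 'floor' can be restated as a simple cost comparison. The Python sketch below is a toy illustration only; the figures and function names are invented here and are not drawn from Coase, Benkler, Shirky or Stalder:

    # Toy make-or-buy comparison behind the 'Coasian ceiling' and 'floor' (invented numbers).
    def make_internally(production_cost, coordination_cost):
        """Cost of producing a resource inside a firm, including managing its complexity."""
        return production_cost + coordination_cost

    def buy_on_market(price, transaction_cost):
        """Cost of acquiring the same resource externally."""
        return price + transaction_cost

    def organise_without_firm(volunteer_coordination_cost):
        """Cost of getting the work done with no formal organisation at all:
        the region below the 'Coasian floor' once coordination is cheap enough."""
        return volunteer_coordination_cost

    # High transaction costs: producing inside the firm is cheaper than buying.
    print(make_internally(100, 20) < buy_on_market(100, 40))   # True

    # Cheap communication lowers transaction and coordination costs:
    # buying becomes competitive, and organising with no firm at all
    # undercuts the fixed overhead of being an institution.
    print(buy_on_market(100, 5) < make_internally(100, 20))    # True
    print(organise_without_firm(3) < 20)                       # True: below the old floor

The point of the toy is only that as the coordination terms shrink toward zero, large-scale efforts no longer need the institutional overhead that the 'floor' used to price out.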


Michel Bauwens

Read an excerpt from the P2P Foundational Manifesto at http://p2pfoundation.net/index.php/3._P2P_in_the_Economic_Sphere

Characteristics of Peer Production

Michel Bauwens

An excerpt from the manuscript.

3.3.C. Beyond Formalization, Institutionalization, Commodification

Observation of commons-based peer production and knowledge exchange unveils a further number of important elements, which can be added to our earlier definition and to the characteristic of holoptism just discussed in 3.4.B.

In premodern societies, knowledge is ‘guarded’, it is part of what constitutes power. Guilds are based on secrets, the Church does not translate the Bible, and it guards its monopoly of interpretation. Knowledge is obtained through imitation and initiation in closed circles.

With the advent of modernity, and let’s think about Diderot’s project of the Encyclopedia as an example, knowledge is from now on regarded as a public resource which should flow freely. But at the same time, modernity, as described by Foucault in particular, starts a process of regulating the flow of knowledge through a series of formal rules, which aim to distinguish valid knowledge from invalid knowledge. The academic peer review method, the setting up of universities which regulate discourse, the birth of professional bodies as guardians of expertise, the scientific method, are but a few of such regulations. An intellectual property rights regime also regulates the legitimate use one can make of such knowledge, and is responsible for a re-privatization of knowledge. If original copyright served to stimulate creation by balancing the rights of authors and the public, the recent strengthening of intellectual property rights can be more properly understood as an attempt at ‘enclosure’ of the information commons, which serves to create monopolies based on rent obtained through licenses. Thus at the end of modernity, in a similar process to what we described in the field of work culture, there is an exacerbation of the most negative aspects of the privatization of knowledge: IP legislation is incredibly tightened, information sharing becomes punishable, the market invades the public sphere of universities and academic peer review, and the scientific commons are being severely damaged.

Again, peer to peer appears as a radical shift. In the new emergent practices of knowledge exchange, equipotency is assumed from the start. There are no formal rules to prohibit anyone from participation, a characteristic that could be called 'anti-credentialism' (unlike academic peer review, where formal degrees are required). Validation is a communal intersubjective process. It often takes place through a process akin to swarming, whereby large numbers of participants will tug at the mistakes in a piece of software or text, the so-called 'piranha effect', and so perfect it better than an individual genius could. Many examples of this kind are described in the book 'The Wisdom of Crowds', by James Surowiecki. Though there are constraints in this process, depending on the type of governance chosen by various P2P projects, what stands out compared to previous modes of production is the self-selection aspect. Production is granular and modular, and only the individuals themselves know whether their exact mix of expertise fits the problem at hand. We have autonomous selection instead of heteronomous selection.

If there are formal rules, they have to be accepted by the community, and they are ad hoc for particular projects. In the Slashdot online publishing system which serves the open source community, a large group of editors combs through the postings, and there’s a complex system of ratings of the editors themselves; in other systems every article is rated, creating a hierarchy of interest which pushes the lesser-rated articles down the list. As we explained above, in the context of knowledge classification, there is a move away from institutional categorization using hierarchical trees of knowledge, such as the bibliographic formats (Dewey, UDC, etc.), to informal communal ‘tagging’, what some people have termed folksonomies. In blogging, news and commentary are democratized and open to any participant, and it is the reputation of trustworthiness, acquired over time by the individual in question, which will lead to the viral diffusion of particular ‘memes’. Power and influence are determined by the quality of the contribution, and have to be accepted and constantly renewed by the community of participants. All this can be termed the de-formalization of knowledge.

A second important aspect is de-institutionalization. In premodernity, knowledge is transmitted through tradition, through initiation by experienced masters to those who are validated to participate in the chain mostly through birth. In modernity, as we said, validation and the legitimation of knowledge are processed through institutions. It is assumed that the autonomous individual needs socialization, ‘disciplining’, through such institutions. Knowledge has to be mediated. Thus, whether a news item is trustworthy is determined largely by its source, say the Wall Street Journal or the Encyclopedia Britannica, which are supposed to have formal methodologies and expertise. P2P processes are de-institutionalized, in the sense that it is the collective itself which validates the knowledge.

Please note my semantic difficulty here. Indeed, it can be argued that P2P is just another form of institution, another institutional framework, in the sense of a self-perpetuating organizational format. And that would be correct: P2P processes are not structureless, but most often flexible structures that follow internally generated rules. In previous social forms, institutions got detached from the functions and objectives they had to serve, became 'autonomous'. In turn, because of the class structure of society, and the need to maintain domination, and because of 'bureaucratization' and self-interest of the institutional leaderships, those institutions turn 'against society' and even against their own functions and objectives. Such institutions become a factor of alienation. It is this type of institutionalization that is potentially overcome by P2P processes. The mediating layer between participation and the result of that participation is much thinner, dependent on protocol rather than controlled by hierarchy.

A good example of P2P principles at work can be found in the complex of solutions instituted by the University of Openness. UO is a set of free-form ‘universities’, where anyone who wants to learn or to share his expertise can form teams with the explicit purpose of collective learning. There are no entry exams and no final exams. The constitution of teams is not determined by any prior disciplinary categorization. The library of UO is distributed, i.e. all participating individuals can contribute their own books to a collective distributed library. The categorization of the books is explicitly ‘anti-systemic’, i.e. any individual can build his own personal ontologies of information, and semantic web principles are set to work to uncover similarities between the various categorizations.

All this prefigures a profound shift in our epistemologies. In modernity, with the subject-object dichotomy, the autonomous individual is supposed to gaze objectively at the external world, and to use formalized methodologies, which will be intersubjectively verified through academic peer review. Post-modernity has caused strong doubts about this scenario. The individual is no longer considered autonomous, but always-already part of various fields, of power, of psychic forces, of social relations, molded by ideologies, etc.. Rather than in need of socialization, the presumption of modernity, he is seen to be in need of individuation. But he is no longer an ‘indivisible atom’, but rather a singularity, a unique and ever-evolving composite. His gaze cannot be truly objective, but is always partial, as part of a system can never comprehend the system as a whole. The individual has a single set of perspectives on things reflecting his own history and limitations. Truth can therefore only be apprehended collectively by combining a multiplicity of other perspectives, from other singularities, other unique points of integration, which are put in ‘common’. It is this profound change in epistemologies which P2P-based knowledge exchange reflects.

A third important aspect of P2P is the process of de-commodification. In traditional societies, commodification and ‘market pricing’ were only a relative phenomenon. Economic exchange depended on a set of mutual obligations, and even where monetary equivalents were used, the price rarely reflected an open market. It is only with industrial capitalism that the core of the economic exchanges started to be determined by market pricing, and both products and labor became commodities. But still, there was a public culture and education system, and immaterial exchanges largely fell outside this system. With cognitive capitalism, the owners of information assets are no longer content to leave any immaterial process outside the purview of commodification and market pricing, and there is a strong drive to ‘privatize everything’, education included, our love lives included. Any immaterial process can be resold as a commodity. Thus again, in the recent era the characteristics of capitalism are exacerbated, with P2P representing the counter-reaction. With ‘commons-based peer production’ or P2P-based knowledge exchange more generally, the production does not result in commodities sold to consumers, but in use value made for users. Because of the GPL license, no copyrighted monopoly can arise. GPL products can eventually be sold, but such a sale is usually only a credible alternative (since the product can most often be downloaded for free) if it is associated with a service model. It is in fact mostly around such services that commercial open source companies have built their model (example: Red Hat). Since the producers of commons-based products are rarely paid, their main motivation is not the exchange value of the eventually resulting commodity, but the increase in use value, their own learning and reputation. Motivation can be polyvalent, but will generally be anything but monetary.

One of the reasons for the emergence of the commodity-based economy, capitalism, is that a market is an efficient means to distribute ‘information’ about supply and demand, with the concrete price determining value as a synthesis of these various pressures. In the P2P environment we see the invention of alternative ways of determining value, through software algorithms. In search engines, value is determined by algorithms that evaluate pointers to documents: the more pointers, and the more value these pointers themselves have, the higher the value accorded to a document. This can be done either in a general manner, or for specialized interests, by looking at the rankings within the specific community, or even on an individual level, through collaborative filtering, by looking at what similar individuals have rated and used well. So in a similar but alternative way to the reputation-based schemes, we have a set of solutions to go beyond pricing, and beyond monetarisation, to determine value. The value that is determined in this case is of course an indication of potential use value, rather than ‘exchange value’ for the market.
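
The two valuation mechanisms sketched in this paragraph can be illustrated in a few lines of Python. The example below is only a toy under simplifying assumptions (tiny invented data, a bare-bones PageRank-style iteration, a crude similarity measure); it is not a description of any actual search engine or recommender:

    # Toy illustrations of algorithmic value determination (invented data, simplified methods).

    # 1. Link-based ranking: a document's value grows with the number of pointers to it
    #    and with the value of the documents doing the pointing (a bare-bones PageRank).
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}

    def rank(links, damping=0.85, iterations=50):
        docs = list(links)
        score = {d: 1.0 / len(docs) for d in docs}
        for _ in range(iterations):
            score = {d: (1 - damping) / len(docs)
                        + damping * sum(score[src] / len(targets)
                                        for src, targets in links.items() if d in targets)
                     for d in docs}
        return score

    scores = rank(links)
    print(max(scores, key=scores.get))  # 'c': most (and best-valued) pointers

    # 2. Collaborative filtering: recommend what the most similar user valued highly.
    ratings = {
        "ann":   {"article1": 5, "article2": 1, "article3": 4},
        "bob":   {"article1": 4, "article3": 5, "article4": 5},
        "carol": {"article2": 5, "article4": 2},
    }

    def similarity(u, v):
        """Crude similarity: items the two users rated within one point of each other."""
        return sum(1 for item in u.keys() & v.keys() if abs(u[item] - v[item]) <= 1)

    def recommend(user):
        nearest = max((n for n in ratings if n != user),
                      key=lambda n: similarity(ratings[user], ratings[n]))
        unseen = {i: s for i, s in ratings[nearest].items() if i not in ratings[user]}
        return max(unseen, key=unseen.get)

    print(recommend("ann"))  # 'article4': bob rates most like ann and values it highly

Both loops derive an indicator of potential use value from behaviour (linking, rating) rather than from a price, which is the point made above.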


See also our entries on Equipotentiality, Anti-Credentialism, and Communal Validation


Peer Production as Producer-Driven

From http://opencontent.org/blog/archives/332:

"Let’s come back to the consumer-driven / producer-driven question and look at open source software as an example. Open source software projects are successful when:


1. a specific person with a specific need develops a specific solution to **their own** specific problem,

2. that person then shares that specific solution with the world,

3. other specific people with the same or a very similar specific need find the solution and adopt or adapt it to solve their own specific problems, and

4. adaptations and extensions of the solution, developed to make the solution solve additional, closely related problems, are shared with the group.

In other words, open source software is the epitome of a producer-driven work. The work begins life with one producer, one with a specific personality and attitude, and continues life with a group of like-minded producers. These producers engage in a kind of work we might call “produce-to-use,” because they make software to satisfy their own needs. This guarantees that the software they produce will be useful to and used by someone. Of course, every project is happy to have users that might be described as “users-only,” but these do not contribute to the long-term growth or health of the project. " (http://opencontent.org/blog/archives/332)

Reasons for Peer Production

Franz Nahrada summarizes the arguments of Yochai Benkler in The Wealth of Networks:

"The networked information economy improves individual autonomy in three ways. (the following observations partly quoted from Yochai Benkler)

  • First, it improves individuals’ capacities to do more for and by themselves. Take baking, for example. The internet offers thousands of different recipes for apple pie. A first-time baker no longer needs to buy a Betty Crocker cookbook, call his grandmother for a recipe, or enroll in a cooking class to learn how to bake a pie. All he needs to do is perform a Google search for the phrase “apple pie recipe”. Likewise, someone skilled in the art of pie-making and with a wish to share his knowledge does not need technical expertise to share it: he could easily start a blog devoted to pie recipes.
  • Second, it improves individuals’ capacity to do more in loose affiliation with others in a non-market setting. Again, the results of the Google “apple pie recipe” search are an example of the success of this loose, uncoordinated affiliation. Other examples would be "peer to peer networks" in which people exchange their music collections, or SETI@home. In this approach the critical issue is an architecture of participation - ‘inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application’. Users do not have to positively act to contribute; their ordinary use of the application is structured so as to benefit others.
  • Third, the networked information society improves individuals’ capacity to cooperate with others through formal or organized groups that operate outside the market sphere, based on voluntary commitment and rules that keep individual contributions in line and workable. Sometimes hierarchies are involved. Wikipedia and the open source software movement are examples.

The fluidity and low level (both in terms of money and time) of commitment required for participation in this wide range of projects is just one of the ways in which the networked information economy has enhanced individuals’ autonomy. Even where there are formal structures, cooperation can easily be broken by "taking the repository" and forking, which leads to very different leadership styles from those in any other historical organisation." (http://www.globalvillages.info/wiki.cgi?GlobalVillages/FranzNahrada/Workspace/RomeSpeech)


Capitalist motivations for supporting open source-based peer production

Peer Production as Neoliberalism

Mike (Mr. Teacup?) in Peer Production as an Illusion:

Open source's integration in business motivations

"The picture we have of open source software development is that it is the spontaneous activity of volunteer programmers collaborating together in a gift economy with no financial incentives and creating software that’s comparable or better than what’s produced by large for-profit proprietary software vendors like Microsoft. The reality is somewhat different – although this gift economy model is accurate in some cases, most of projects of modest size are funded by corporations in one way or another. This is certainly true of every open source “success story” where the open source product has achieved greater market share than the proprietary one.

The precise form of corporate sponsorship varies: in some situations, the bulk of the code is written by programmers who are employed by corporations that pay them to contribute to the project. This describes the Linux project. According to an analysis by Linux kernel contributor Jonathan Corbet, 75% of code is written by paid developers working for IBM, Red Hat, Novell, etc. – companies that compete with each other in the marketplace, but cooperate by funding development of the Linux kernel. Another example is WebKit, the main technology behind Google’s Chrome browser, which is run by programmers from Apple, Google, Nokia, Palm, Research in Motion, Samsung and others. The webkit.org domain is owned by Apple and the corporate connection is apparent on the Contributing Code page, which says “If you make substantive changes to a file, you may wish to add a copyright line for yourself or for the company on whose behalf you work.”

A more common business model is services-and-support – a company owns the copyright for the software and pays for and organizes much of the development. This is profitable because the company uses the software almost as a form of advertising to help sell support and consultation services to large businesses that want to use it. This describes MySQL, PHP, Red Hat, Ubuntu and many others.

These business models evolved in the late 90s and early 2000s, when the question of how open source could be profitable was a hot topic of debate on internet fora. Rarely, someone asked whether open source should be profitable and questioned its inclusion into capitalism, but they were always shouted down. The conclusion to be drawn is that open source software does attract volunteers, but not for serious projects that are competitive with proprietary software. These almost always have substantial corporate backing.

In 2005, Bruce Perens, co-founder of the Open Source Initiative and coiner of the term “open source”, wrote The Emerging Economic Paradigm of Open Source to explain why it is profitable for corporations to fund open source projects, specifically addressing and refuting the claim that open source software is a gift economy. This claim was made by OSI co-founder Eric Raymond in one of the best-known explications of open source philosophy, The Cathedral and the Bazaar. Perens says:

- Raymond did not attempt to explain why big companies like IBM are participating in Open Source; that had not yet started when he wrote. Open Source was just starting to attract serious attention from business, and had not yet become a significant economic phenomenon. Thus, The Cathedral and the Bazaar is not informed by the insight into Open Source’s economics that is available today. Unfortunately, many people have mistaken Raymond’s early arguments as evidence of a weak economic foundation for Open Source. In Raymond’s model, work is rewarded with an intangible return rather than a monetary one. Fortunately, it’s easy to establish today that there is a strong monetary return for many Open Source developers. But that return is still not as direct as in proprietary software development.

Perens goes on to argue that open source is economically rational for certain classes of software that he calls non-differentiating: technology that is used to support the functions of a business, but isn’t a competitive advantage. These are things like computer operating systems, web servers, web browsers, database software, word processors, spreadsheets, etc., which can be considered part of the infrastructure of a business. Often, differentiating technology is built out of these components, and Perens uses the example of Amazon’s book recommendation software – customers might go to Amazon because of this feature, which Barnes & Noble does not have, so this should be kept proprietary. But customers don’t shop at Amazon because of its amazingly flexible and efficient web servers, so it makes sense for them to collaborate with Barnes and Noble (and vice versa) on this part of their business.

He claims that something like 90% of software is infrastructure, which is a useful analogy. Corporations don’t create their own private roads, they effectively collaborate with their competitors by funding public roads through their tax dollars. Historically, corporations have used private, closed consortia to collaborate on software infrastructure – member companies agree to fund the development of software and make it available to each member. In many cases these have become outmoded and replaced with open source projects, and Perens lists some examples: the Taligent consortium to standardize Unix has been replaced by the open source Linux project, and the Common Desktop Environment was replaced by the open source GNOME Project. But the consortium model continues to be relevant in other areas, especially in developing industry standards: the Unicode Consortium, the World Wide Web Consortium, the Internet Systems Consortium are all funded and staffed by major internet companies.

The history of software consortia is sometimes forgotten even in academic accounts of the open source movement. We’re often presented with an appealing but simplistic David-and-Goliath narrative of a few individual hackers refusing to bend to the trend of proprietary software." (http://www.mrteacup.org/post/peer-production-illusion-part-1.html)
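
Corbet's figures come from attributing kernel commits to the companies that employ their authors. A very rough approximation of that kind of analysis can be run against any local git clone by grouping commits by author email domain; the sketch below is only such an approximation (people commit from personal addresses, change employers, and so on) and is not Corbet's actual methodology or tooling:

    # Rough sketch: count commits per author email domain in a local git repository,
    # as a crude proxy for which organisations contribute how much. Not Corbet's method.
    import subprocess
    from collections import Counter

    def commits_by_domain(repo_path):
        """Return a Counter mapping author email domains to commit counts."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--format=%ae"],  # one author email per commit
            capture_output=True, text=True, check=True,
        ).stdout
        domains = Counter()
        for email in out.splitlines():
            if "@" in email:
                domains[email.rsplit("@", 1)[1].lower()] += 1
        return domains

    counts = commits_by_domain(".")            # point this at any local clone
    total = sum(counts.values()) or 1
    for domain, n in counts.most_common(10):
        print(f"{domain:30s} {n:8d}  {100 * n / total:5.1f}%")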

Conditions for Peer Production

Yochai Benkler on the conditions for success

As summarized by Tom Abate at http://www.newcommblogzine.com/?p=509


"Benkler lays out three characteristics of successul group efforts:

“They (the tasks)

1) must be modular. That is, they must be divisible into components, or modules, each of which can be produced independently of the production of the others. This enables production to be incremental and asynchronous, pooling the efforts of different people, with different capabilities, who are available at different times.

2) For a peer production process to pool successfully a relatively large number of contributors, the modules should be predominantly fine-grained, or small in size. This allows the project to capture contributions from large numbers of contributors whose motivation levels will not sustain anything more than small efforts toward the project ...

3) ... a successful peer production enterprise must have low-cost integration, which includes both quality control over the modules and a mechanism for integrating the contributions into the finished product, while defending itself against incompetent or malicious contributors.” (http://www.newcommblogzine.com/?p=509)
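
These three conditions map directly onto how peer production projects are typically organised in software: the work is split into small independent modules, contributions of any size are accepted asynchronously, and an automated check keeps the cost of integration low. The sketch below is a generic, invented illustration of that structure (the module names and the quality check are not Benkler's):

    # Toy illustration of Benkler's three conditions: modular tasks, fine granularity,
    # and low-cost integration via an automated quality check. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Module:
        name: str
        content: str = ""       # empty until someone contributes
        accepted: bool = False

    def quality_check(text):
        """Stand-in for review/tests: here, just 'non-trivial and not all shouting'."""
        return len(text.split()) >= 3 and not text.isupper()

    def integrate(module, contribution):
        """Low-cost integration: accept a contribution only if it passes the check."""
        if quality_check(contribution):
            module.content, module.accepted = contribution, True
            return True
        return False

    # The work is divided into fine-grained, independent modules, so contributors with
    # different time budgets can each take on a small piece, asynchronously.
    project = [Module("intro paragraph"), Module("install instructions"), Module("FAQ entry")]
    integrate(project[0], "Peer production pools many small contributions.")
    integrate(project[1], "SHOUTING")   # rejected by the quality check
    integrate(project[2], "Yes, you can fork the project at any time.")
    print(sum(m.accepted for m in project), "of", len(project), "modules integrated")  # 2 of 3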


Don Tapscott in Wikinomics

"Peer Production works best when at least four conditions are present:


1. The object of production is information or culture, which keeps the cost of participation low for contributors

2. Tasks can be chunked out into bite-sized pieces that individuals can contribute in small increments and independently of other producers (e.g. entries in an encyclopedia or components of a software program). This makes their overall investment of time and energy minimal in relation to the benefits they receive in return.

3. Benefits of participation are articulated, i.e. content is improved and contributors are compensated.

4. The costs of integrating those pieces into a finished end product, including leadership and quality-control mechanisms, must be low." (http://www.eu.socialtext.net/wikinomics/index.cgi?peer_production)

Chris Ahlert reflects on the expansion of peer production

URL = http://openbusiness.cc/category/models

"The main difference between software and material good productions concerns their outcomes: software and material good. Software is a kind of information and immaterial in its essence, and hence extremely easy to copy, distribute and share. On the other side, material goods are not copyable at all, and are not so easy to share ultimately. This difference leads to an important consequence: when material goods are sold a producer is alienated from the goods sold. Conversely, when software are "sold" a producer does not lose them. To show the possibility of expanding the FOSS model, we draw upon the following analytical division of the production of any economic areas:

(1) The production of 'knowledge of production'. It is part of the means of production and mainly an outcome of R&D activities, and a part of it is already in the public domain;

(2) The production of 'material goods'. It uses the outcome of the previous production and is usually the end product that is sold to consumers (TVs, furniture, cars, etc.);

(3) The "production" of services. It may be regarded as the work of installing, fixing and maintaining material and immaterial goods.

In the software area, however, the end product is not the result of (2), but of (1), since software is part of both the means and the outcome of software production, and of course is not material. Software is part of the means to produce more software. In the proprietary model, software is artificially regarded as a material good, and thereby as if it were an outcome of activity (2). The price of the end software covers the cost of activity (1), whose outcome is kept secret and privately owned. By contrast, free software is regarded as information and kept free through GPL-like licenses; software sources are open and produced cooperatively. In the FOSS business model, what pays for the "cooperative R&D" that develops free software is the selling of material goods and services related to the software developed cooperatively.

In other areas of the economy, which concern material goods, these goods are of course not shareable, and neither are tools, machinery and other physical infrastructure. However, the knowledge of production is indeed shareable and, in a sense, very similar to software. That is the key to expanding the FOSS model to other economic areas. In the traditional capitalist model, the knowledge of production is regarded in the same way as software in the proprietary model: the R&D outcome and knowledge of production in general are held secret, privately owned, and, in the specific case of material production, often protected against competitors through patents. So, knowledge of production is mostly developed and owned privately, its costs are embedded in the price of material goods, and this may lead to consumer lock-in, monopoly, etc. Conversely, knowledge of production could be regarded in the same way as free software: any knowledge of production could be developed cooperatively and owned collectively. We may call this the Free Open Knowledge of Production (FOKP) model, and think of a specific FOKP for each area of production, from TVs and cars to furniture and houses. To clarify this idea, we now develop a complete parallel with the FOSS model to show how the FOKP model would work, hypothetically. In the FOKP model, the knowledge of production of any economic area is developed in a voluntary FOKP community of developers, producers and consumers, which is a huge, strong and friendly community, based on sharing and cooperation, not only on competition and the race for money. In this cooperative environment, many gifted developers work together for the common good. There is a free knowledge ideology behind this model, which is about giving freedom to developers, producers and consumers of material goods, unlocking information and supporting the free flow of innovation.

There is a key feature in the FOKP model: its GPL-like licenses keep free every new piece of knowledge of production developed from previous ones. Everyone is free to distribute free knowledge, but only if they distribute it under the same free license, which secures the collective property of free knowledge of production and assures the four freedoms to every developer, producer and consumer:

(0) The freedom to use the knowledge of production, for any purpose;

(1) The freedom to study the knowledge of production, and how the produced good should work, in order to adapt it to your needs;

(2) The freedom to redistribute copies, so you can help your neighbour;

(3) The freedom to improve the knowledge of production, and release improvements to the public, so that the whole community benefits;

Through FOKP licenses, the production of free knowledge becomes intrinsically cooperative and community-driven: the entire FOKP community may participate in developing free knowledge of production, reporting problems with the goods produced, deciding about new features that are needed in certain goods, writing documentation, translating consumers' needs, etc. In short, free knowledge is produced cooperatively by many people, and free licenses are what adds a magic glue to the FOKP community: the good feeling that comes from doing good for others, and knowing that it will continue to do that good for as long as it is used. The model also leads to a new kind of business: the FOKP business model, which is based on selling only the material goods and services, but not the outcome of R&D activities, which is mainly developed cooperatively and owned collectively. Open organisations profit not from a private knowledge of production, but from the production of material goods and related services themselves, that is, from the work actually done to make them. Competition then takes place over the kinds, variety, combination and quality of the produced goods and services. Presumably, this new model has several consequences: (a) innovations are more oriented to consumers' actual needs; (b) generally, free knowledge of production is developed more quickly, and the material goods produced using it are of higher quality than proprietary ones; (c) cooperation and competition are both widely stimulated, speeding technological advance; and (d) consumer lock-in and monopolies are naturally avoided. This vision is powerful: it does seem to be feasible in some way or another. But we should be cautious about the actual viability of the FOKP model." (http://openbusiness.cc/category/models)


Continue to Peer Production - Part Two

Continued at Peer Production - Part Two

What you can find there:

  • 1 Consequences of Peer Production

      o 1.1 On the difference between capitalists and entrepreneurs
      o 1.2 On the difference between for profit and for benefit
      o 1.3 Conclusion

  • 2 Criticism of Peer Production

      o 2.1 Main Arguments summary
      o 2.2 Nicholas Carr on the limited application field of peer production

  • 3 The Evolution of Peer Production