Complex Adaptive Systems

From P2P Foundation


  • "A Complex Adaptive System (CAS) is an entity consisting of many diverse and autonomous components or parts called agents which are interrelated, interdependent, linked through many interconnections, and behave as a unified whole in learning from experience and in adjusting (not just reacting) to changes in the environment. Each individual agent of a CAS is itself a CAS: a tree, for example, is a CAS within a larger CAS (a forest) which is a CAS in a still larger CAS (an ecosystem). Similarly a member of a group is just one CAS in a chain of several progressively encompassing a community, a society, and a nation. Each agent maintains itself in an environment which it creates through its interactions with other agents. Every CAS is more than the sum of its constituting agents and its behavior and properties cannot be predicted from the behaviors and properties of the agents. CAS are characterized by distributed and not centralized control and, unlike rigid systems, they change in response to the feedback received from their environment to survive and thrive in new situations." [1]
  • “A Complex Adaptive System (CAS) is a dynamic network of many agents (which may represent cells, species, individuals, teams, firms, nations) acting in parallel, constantly acting and reacting to what the other agents are doing. The control of a CAS tends to be highly dispersed and decentralized. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. The overall behavior of the system is the result of a huge number of decisions made every moment by many individual agents.” [2]
  • "A CAS behaves/evolves according to three key principles: order is emergent as opposed to predetermined (c.f. Neural Networks), the system's history is irreversible, and the system's future is often unpredictable. The basic building blocks of the CAS are agents. Agents scan their environment and develop schema representing interpretive and action rules. These schema are subject to change and evolution."[3]
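The three quoted definitions share a common skeleton: many agents acting in parallel on local information, each carrying a mutable "schema" of interpretive and action rules, with global order emerging rather than being imposed. The sketch below is a minimal illustration of that skeleton only — it is not from any of the cited authors, and the `Agent` class, the imitation threshold, and the ring topology are all simplifying assumptions chosen for brevity.

```python
import random

class Agent:
    """One CAS agent: it scans its neighbourhood and keeps a simple
    'schema' (an imitation threshold) that itself drifts over time."""
    def __init__(self):
        self.state = random.choice([0, 1])
        self.threshold = random.uniform(0.3, 0.7)  # the agent's schema

    def step(self, neighbour_states):
        share = sum(neighbour_states) / len(neighbour_states)
        # action rule: adopt the local majority view if it clears the threshold
        if share > self.threshold:
            self.state = 1
        elif share < 1 - self.threshold:
            self.state = 0
        # schema evolution: the rule itself mutates slightly each round
        self.threshold = min(0.9, max(0.1, self.threshold + random.gauss(0, 0.02)))

def simulate(n=50, steps=100, seed=1):
    random.seed(seed)
    agents = [Agent() for _ in range(n)]
    for _ in range(steps):
        snapshot = [a.state for a in agents]   # all agents act "in parallel"
        for i, a in enumerate(agents):
            # each agent sees only its two ring neighbours: control is local
            a.step([snapshot[(i - 1) % n], snapshot[(i + 1) % n]])
    # fraction of agents in state 1: a global pattern nobody designed
    return sum(a.state for a in agents) / n

print(simulate())
```

Note that there is no central controller anywhere in the loop: whatever aggregate pattern the final fraction reflects arises only from local interactions, which is the point of the definitions above.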


Dave Snowden:

"Distributed cognition means far more than the more popular phrase "wisdom of crowds," which is a misnomer because crowds can be more foolish than wise. Distributed cognition means using network intelligence. The classic example is the Grameen Bank or micro-lending, with self-forming lending groups determining loan allocation rather than centralized credit scoring. In the context of modern management practice, that means the top-down stimulation of bottom-up activity. It’s not about delegation per se, or the absence of management, but it is about using the capacity of diverse networks to contribute to decision-making and system design; shifting the analyst from prime investigator and interpreter to a role of synthesis; and allowing systems to emerge through the interaction of people with software, rather than designing that use in advance.

Finely granulated objects have more utility than chunked up documents (information) or massive organizational empires. The basic idea is simple: Small things are more adaptable than big things, and they are frequently more interesting and more able to gain our attention. People will spend more time surfing the Web and using the fragmented material of an RSS feed than reading documents. It’s easier to write a blog than a book. Fine granularity material can combine in novel and different ways more easily than formal documents. Fragmented stories of partial failure create more learning than formal documents summarizing best practice. Fragmented material can combine and recombine in novel and different ways, a form of conceptual blending. In organizations, small, self-forming teams are more adaptive than matrix structures. Networks adapt faster than hierarchies.

Disintermediation is one of those interesting words that border on jargon, but it is too useful to abandon. It means removing the layers that separate decision makers from raw data—allowing them to move from an abstract representation of a large data set, spot patterns and anomalies, and focus on the five or six items to which they really need to pay attention. There is an ethical dimension to this too. When people encounter real stories/pictures, etc., they are far more likely to gain empathy and understanding, and therefore make more contextually aware decisions." (via email, November 2013)


Ten governance characteristics, as outlined by Dave Pollard:

"ten things to remember about complex adaptive systems (which include all social and ecological systems):

1. It is impossible to know 'enough' about such systems to prescribe blanket 'solutions' to 'problems' in such systems: There are too many variables. A one size answer never fits all in such systems.

2. The wisdom of crowds is essential to even a basic understanding of such systems: The more people involved in understanding, thinking about, and making decisions about such systems, the more likely those decisions are to be effective. And that means diverse people – including front line workers and even (gasp!) customers, not just larger groups of egotistical muddle-headed managers.

3. Such systems are unpredictable: Because there are so many variables, many of them unknown, it is folly to even attempt to predict what will happen, even in the short term. It is even more folly to reward senior people for having guessed 'right' or to penalize them for having guessed 'wrong'.
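The unpredictability in point 3 can be made concrete with a standard toy example (not from Pollard): even a fully deterministic one-line rule — the logistic map at r = 4 — amplifies a microscopic difference in starting conditions until two "forecasts" bear no relation to each other. Real complex adaptive systems have far more variables than this, which only makes the problem worse.

```python
def logistic(x, r=4.0):
    # a completely deterministic update rule, yet chaotic at r = 4
    return r * x * (1 - x)

# two forecasts whose starting points differ by one part in ten billion
a, b = 0.3, 0.3 + 1e-10
gap = 0.0
for t in range(60):
    a, b = logistic(a), logistic(b)
    gap = max(gap, abs(a - b))

print(gap)  # the microscopic initial difference has grown to order 1
```

If even this single-variable system defeats short-term prediction, rewarding executives for having guessed "right" about a market is indeed rewarding luck.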

4. Many of the variables in such systems are uncontrollable: Big organizations have this crazy belief that they have the power to change markets and processes, when in fact all their billion-dollar ad campaigns do is tap into (or fail to tap into) a latent demand, and their process changes mostly show up only in the procedure manuals and Intranet sites. If they're actually implemented on the front lines, chances are the 'improvements' were already being done unofficially because the people on the front lines already realized their value. And if (as is often the case) the management-driven process change actually makes things worse, the people on the front line will simply find a workaround that makes it appear that they're complying when they're really not. Front-line workers are expert at this. What's really ridiculous is that senior executives and consultants are unfairly rewarded when their changes 'work' (i.e. when profit rises in the following quarter) and unfairly penalized when they don't, when in reality any correlation between the process/program changes and subsequent changes to the bottom line is almost always sheer coincidence.

5. In such systems, prevention is difficult but better than a cure after the fact: You don't need to be able to predict disaster to be able to put in steps to help prevent it. Prevention requires imagination, and unfortunately we live in a world (especially true in large organizations, where imagination is actively discouraged) of terrible imaginative poverty. And when organizations do think about 'what could go wrong', their thoughts are almost always mis-focused on external risk mitigation. You can't prevent disruptive innovations coming from outside your organization. You can prevent all your best people from leaving because they're underappreciated and disengaged.

6. In such systems there are no 'best practices' or 'best policies': Every situation in complex adaptive systems is unique. Trust the people closest to that situation to know what to do, don't try to impose some practice that worked well in some completely different context (though telling a story about that practice might help those closest to the situation decide whether it could be adapted to their situation). Likewise, don't expect any policy to suit all situations, and don't insist that policies be followed blindly. Trust (there's that word again) the people closest to the situation to know what's best to do.

7. In such systems, great models can spread but they usually can't be scaled: Most of the huge inept bureaucracies in our world probably started out as effective, well-functioning small-scale experiments. As soon as such experiments get recognized as 'working models' (not the same as 'best practices', though many people, alas, don't know the difference), there's a tendency for them to spread virally, and get adapted to suit different circumstances. That's a good thing. But there's also a tendency for someone to try to make them work on a larger (sometimes much larger) scale. If you don't understand why this almost always fails, re-read Small is Beautiful.

8. There is a tendency for those working in such systems to presume 'learned helplessness' of customers and employees: The customer, the citizen, is often viewed as a mere, passive consumer of your organization's products and so-called wisdom. The employee, likewise, is assumed to be ignorant, stupid and disinterested in the success of the organization beyond his/her own job. Most people don't take kindly to having their intelligence insulted. And failure to engage customers and employees in co-producing the product is a tragic waste of great opportunity. The key is knowing how to engage them: Not through passive questionnaires or surveys, but through conversations, stories, and presenting the 'problem' to them so they can help you appreciate it better and then address it. Learned helplessness is widespread, but it's an easily curable disease.

9. In such systems, genuine decentralization is almost always a good idea: That means pushing out real authority along with responsibility. It means making a patient investment in people as they learn from mistakes (by patient I mean years, not months, and the investment includes writing off the mistakes as professional development, not penalizing them). It means setting realistic goals, providing appropriate money without strings attached or second guessing, and letting small decentralized units self-manage. The only crimes of a self-managing unit that should result in re-assessment are (a) protracted failure of the unit to collaborate well amongst themselves and (b) prolonged dissatisfaction of customers. And both problems suggest that you have incompetent people in the units, not that the idea of having decentralized self-managing units is a bad idea.

10. In such systems, networks outperform hierarchies: This is a corollary of the other nine tenets of complex adaptive systems. Information, ideas and working models spread faster and more effectively peer-to-peer than up and down hierarchies."
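Point 10 can be given a rough quantitative reading. The toy model below is our own illustration, not Pollard's; the function names and the figure of 40 shortcuts are arbitrary assumptions. It measures average shortest-path length (the number of hops a message needs) in a strict reporting hierarchy, modelled as a balanced binary tree, and in the same organization after a handful of random peer-to-peer links are added. Extra links can only shorten paths, which is one simple reason information spreads faster through networks.

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all reachable ordered pairs,
    via a breadth-first search from every node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def balanced_tree(n=127):
    """Strict hierarchy: node i reports only to its boss, node (i-1)//2."""
    adj = {i: set() for i in range(n)}
    for i in range(1, n):
        boss = (i - 1) // 2
        adj[i].add(boss)
        adj[boss].add(i)
    return adj

random.seed(0)
hierarchy = balanced_tree()
network = balanced_tree()
for _ in range(40):  # add 40 random peer-to-peer shortcuts
    a, b = random.sample(range(127), 2)
    network[a].add(b)
    network[b].add(a)

print(avg_path_length(hierarchy), avg_path_length(network))
```

In the hierarchy, a message between two front-line nodes may have to travel all the way up to a common boss and back down; the peer links bypass those intermediate layers, echoing the disintermediation point in the Snowden quote above.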

More Information

Notes about Complex Adaptive Systems:

1. Lansing on Complex Adaptive Systems
2. Santa Fe Institute’s John H. Holland
3. K. Dooley, Arizona State University