General Collective Intelligence

From P2P Foundation

Discussion

Andy Williams:

1.

"A General Collective Intelligence is a hypothetical platform which organizes groups of freely acting individuals according to these same principles. GCI has been demonstrated to have the potential to exponentially increase impact on collective problems, such as exponentially increasing impact on the Sustainable Development Goals per program dollar. We often say that such and such a problem could be solved “if only everyone …” but fail to take the next step in reasoning: if the stable dynamic in a system does not support a given mode of cooperation, the only reasonable thing to do is to change the system.

Without understanding why General Collective Intelligence is required for massive bottom-up cooperation to be reliably achievable, we can only respond to collective challenges reflexively, according to known principles that have not proved successful in solving those challenges before, and without sufficient clarity to see what we have been missing, with the result that we spend our energies pursuing solutions that cannot bear fruit. Just as multicellular cooperation is part of everything an organism successfully accomplishes, GCI is part of every solution to every group problem. It can potentially be incorporated into any product or service; for example, it can be incorporated into technology to create news media, social media, or even search engines that can’t be censored to support any self-serving narrative.

Upon hearing self-serving narratives our inclination might be to combat them by spreading what we believe to be the truth, but spreading information to fight self-serving narratives is only reliably useful where it creates the potential for asymmetric impact. Interventions must have the capacity to succeed through liberating only a few free-thinking individuals, even where the number of individuals conditioned to be susceptible to such narratives, whether by the educational system, by the media, or by apathy in the face of the demands of daily life, might be far, far larger.

...


Any decision-making process executed by a group of individuals, even one which appears to be decentralized, is centralized where it contains processes that it cannot change itself through decentralized processes (e.g. donor processes that don't allow input from the beneficiaries of that donation). By this definition, since GCI has not yet been implemented, I suggest that no truly decentralized decision-making system exists today, and therefore that some collective problems are not reliably solvable. Alternatively, diversity, inequality, or any other property of groups can potentially be leveraged by a system of optimization that serves the interests of the majority or even all individuals, which is referred to here as a system of "collective optimization". Whereas any centralized problem-solving process cannot reliably be constrained so that its most stable dynamic is not serving those centralized interests, a system of collective optimization (such as a GCI) cannot be constrained so that its most stable dynamic is not serving the collective; in other words, it has dynamically stable collective general problem-solving ability. Finally, GCI isn't any form of communism or collectivism. In fact it isn't any ideology at all. It's just a system that creates the potential capacity to exponentially increase the general problem-solving ability of groups, and therefore to increase their ability to solve any problem that is important to them." (https://groups.google.com/d/msgid/intellectual-deep-web/530850241597673216%40gmail.com)


2.

"Natural systems can self-organize in a way that reliably achieves an exponential increase in collective outcomes. Examples of this can be seen every day: a seed can grow from a seedling with a single leaf into a tree with millions of leaves, which must be assumed to exponentially increase its ability to achieve some outcome such as photosynthesis. Otherwise the cooperation within that collection of an exponentially greater number of cells would not be stable, and would not reliably be able to become that tree. Assuming that problem-solving ability is a reflection of the ability to achieve some outcome through problem-solving, this exponential increase in collective outcomes reflects an exponential increase in collective general problem-solving ability. General Collective Intelligence attempts to abstract this pattern for increasing problem-solving ability and to replicate it so it might potentially be incorporated into every product or service. As explored in a number of upcoming papers, this implies removing hidden barriers to vastly increasing and accelerating our collective capacity to implement the peer-to-peer platforms the P2P Foundation is focused on.

It's also obvious, from the fact that single-celled plants aren't observed to grow into trees, that the problem of achieving this exponential increase in outcomes can't reliably be solved by any single cell that organizes its behavior according to what optimizes outcomes for the cell itself, rather than according to what optimizes outcomes for the organism as a collective of cells. Assuming this ability to solve some collective problems can only be achieved through a system of collective optimization, General Collective Intelligence attempts to replicate this collective optimization, removing hidden barriers to solving problems like that of implementing such peer-to-peer platforms able to achieve far greater collective impact.

The larger importance is that this natural pattern suggests that without such an approach certain collective problems (such as the Sustainable Development Goals) might not be reliably solvable. If observation of nature suggests that some problems in groups of entities require collective optimization in order to be reliably solvable, then it might be assumed that a system of collective optimization must be able to explore solutions that individual entities don't reliably explore. In addition, if the problem of achieving some level of an outcome requires a system of collective optimization to achieve an exponential increase in the number of entities involved in executing those solutions in order for the problem to be reliably solvable, then it might be assumed that a system of collective optimization must be able to reliably self-assemble those entities into self-sustaining networks at the level of participation required. I hope to outline why observation of natural systems suggests that a system of collective optimization of our capacity to solve wicked societal problems (like, potentially, GCI) is required for solutions to be possible and reliably achievable. How these arguments might be proven is a matter for speculation, but it is hoped that there is sufficient value in the observation that nature has already developed a solution to our largest problems, and has proven the effectiveness of this solution over billions of years, for the pattern observed in that solution to be worth describing." (https://groups.google.com/d/msgid/intellectual-deep-web/538913239655906304%40gmail.com)


Critique of the Concept by Alexander Bard

Alexander Bard:

"Massively increased speed of communication, storage of data, and processing capacity will undoubtedly change the world radically. What we call "symbiotic intelligence" is bound to explode. But this is strictly a technological revolution in the McLuhanite sense.

Because this does not mean any change in itself for human intelligence. Our babies are born just as stupid as, or even more stupid than, we are. We are lucky, though, to possess a "pathos" that no current technology can even remotely pursue. But the "logos" is increasingly in tech hands, as it has been, for our own survival's sake, since the birth of technological civilization.

So why are we talking about "general collective intelligence" here? And why these spooky Wilberesque levels of intelligence here? This makes no sense but rather smacks of the usual old IQ astrology arguments.

Why not instead work with "paradigmatics" as the science of how humans can best adapt to an increasingly intelligent (and very likely also totalitarian) informationalist environment? Period. So I'm not interested in what babbling humans can do. I'm interested in what machines can do to add value to human enterprises without us all going down towards an apocalyptic end. "Symbiotic intelligence" is that term. Nothing general and nothing collective. Collaborative." (https://groups.google.com/d/msgid/intellectual-deep-web/CAPgYmjXXWA9SMSL4hpeqpc3ZXsTT6fRL6CprtH%2BeKLKV6xgNgg%40mail.gmail.com)