Meditations on Moloch the Machine God

From P2P Foundation

* Essay: Meditations on Moloch. Scott Alexander.

URL = https://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Context

Alexander Beiner:

"Moloch The Machine God

Google and Microsoft are in an arms race to create something that could not only destroy the information commons, but potentially all of humanity. Both they and the fish farmers are caught in what’s called a multi-polar trap. A race-to-the bottom situation in which, even though individual actors might have the best of intentions, the incentive structures mean that everyone ends up worse off, and the commons is damaged or destroyed in the process. In his seminal essay ‘Meditations on Moloch’, Scott Alexander personified this universal conundrum in our society as ‘Moloch’. Moloch was a Canaanite god of war, a god people sacrificed their children to for material success. I’ve written about Moloch several times before, and I think Liv Boeree’s explanation from a piece I wrote about psychedelic capitalism is one of the best I’ve heard:

- If there’s a force that’s driving us toward greater complexity, there seems to be an opposing force, a force of destruction that uses competition for ill. The way I see it, Moloch is the god of unhealthy competition, of negative sum games.

Understanding Moloch is crucial for understanding what we’re facing with Artificial Intelligence. Firstly, it reminds us that all AI research is embedded in a socio-economic system that is divorced from anything deeper than winning its own game. We might call for a halt to research, or ask for coordination around ethics, but it’s a tall order. It takes just one actor refusing to play along (turning off their metaphorical fish filter), and everyone else is forced into the multi-polar trap."

(https://beiner.substack.com/p/reality-eats-culture-for-breakfast)
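
The incentive structure Beiner and Boeree describe can be made concrete with a little game theory. Below is a minimal payoff sketch of the fish-farm parable in Python; the farmer count, filter cost and pollution damage are illustrative assumptions, not figures from Alexander’s essay or Beiner’s piece.

 # Illustrative sketch of the fish-farm multi-polar trap: each of N farmers
 # either runs a pollution filter (cooperates) or does not (defects).
 # All numbers are made up for illustration.

 FILTER_COST = 300        # monthly cost of running your own filter
 POLLUTION_DAMAGE = 100   # damage each unfiltered farm inflicts on every farm

 def payoff(i_filter: bool, others_filtering: int, n_farmers: int) -> int:
     """One farmer's monthly profit change, relative to a clean lake."""
     # Count the farms not filtering, including me if I defect.
     defectors = (n_farmers - others_filtering) - (1 if i_filter else 0)
     return -(FILTER_COST if i_filter else 0) - POLLUTION_DAMAGE * defectors

 N = 10
 print(payoff(True,  N - 1, N))   # -300   everyone cooperates
 print(payoff(False, N - 1, N))   # -100   I alone defect: better for me
 print(payoff(True,  0,     N))   # -1200  I alone cooperate: worst outcome
 print(payoff(False, 0,     N))   # -1000  everyone defects: worse than -300

Whatever the others do, defecting pays better for the individual (-100 beats -300; -1000 beats -1200), yet the all-defect outcome leaves every farmer far worse off than the all-cooperate one. That asymmetry, not anyone’s bad intentions, is what makes a single non-cooperating actor enough to drag everyone into the trap.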


Discussion

Alexander Beiner:

"To see why a humanistic stance isn’t enough to create ethical technology, let’s imagine for a moment that Moloch is more than just a metaphor. Instead, it’s an unseen force, an emergent property of the complex system we create between all our interactions as human beings. Those interactions are driven by behaviors, memes, ideas and cultural values which are all based on what we think is real and what we feel is important.

AI researchers are calling on one another to find a higher value than growth and technological advancement, but they are usually drawing those values from the very same humanistic perspectives that built the tech to begin with. It is of limited effectiveness to appeal to ethics in a socio-economic system that values growth over all things. Deep down we probably know this, which is why our nightmarish fantasies about the future of AI look very much like a manifestation of Moloch. A new god that cares nothing for us. A gnostic demon that has no connection to anything higher than domination of all life. A mad deity that, much like late-stage capitalism, can see nothing beyond consumption.

In Meditations on Moloch, Scott Alexander states that ‘only another God can kill Moloch’, which raises the question: where might we find that god today? Another way to put this is, “what could we appeal to that is so strong, so compelling, that it spurs the kind of collective action and coordination needed to tackle the dangers of exponential technology?”

Another option is that we don’t. We set aside, for a moment, the question of what values we should build into tech, or what values we should appeal to in order to halt its disastrous progress. In fact, we forget about values and ethics entirely.

...

Real virtue comes not from performative values, but from an alignment with a deeper level of reality. Or, as Lao Tzu puts it, “The highest virtue is not virtuous and that is why it is virtuous.” In his excellent piece on Tolkien and C.S. Lewis, N.S. Lyons describes how both thinkers saw the seeds of our growing tech dystopia. And part of it, according to C.S. Lewis, came from a misunderstanding of where values come from. Lewis wrote that “The human mind has no more power of inventing a new value than of imagining a new primary colour.”

Values, in the way C.S. Lewis and theologians might define them, come from an alignment with true reality. They are not commodities for us to trade in a narcissistic quest for self-fulfillment, or things we can make up, choose from a pile and build into our technology.

From this perspective, real values come from beyond humans. From the divine. They are not something that can be quantified into ones and zeroes, but something that can be felt. Something that has quality.

Quality is probably the most important concept in this piece - but what it has to do with AI isn’t immediately obvious. To see why it matters, we can look at the role an orientation toward quality plays, or doesn’t, in the metaphysical foundations of the West.

Let’s return to the lake, press rewind and go back in time. The farms vanish. Forest erupts on the shore. The jetty disappears. The sun rises and falls, rises and falls. Stop. We’ve reached the late 1600s, and the beginning of the cultural conditions that led to AI. It was between now and the early 1800s that the Age of Enlightenment would radically change the world.

It was during this time that the things we take for granted now in the West – individual liberty, reasoned debate, and the scientific method – began to take shape. Our science, philosophy and conception of reality are still heavily influenced by Enlightenment thinking – specifically, by a French philosopher and mathematician named René Descartes. In the mid-1600s, Descartes sowed the intellectual seeds that would radically change how we view ourselves, our minds and matter. As John Vervaeke explains in Awakening from the Meaning Crisis:

[Descartes’] whole proposal is that we can render everything into equations, and that if we mathematically manipulate those abstract symbolic propositions, we can compute reality. Descartes saw in that a method for how we could achieve certainty. He understood the anxiety of his time as being provoked by a lack of certainty and the search for it, and this method of making the mind computational in nature would alleviate the anxiety that was prevalent at the time.

Just as we are living through the anxiety of the metacrisis and the threat of AI, Descartes was living through a time of great uncertainty and recognized the need for a new paradigm. What he is most famous for is ‘Cartesian dualism,’ the idea that mind and matter are separate. As a Catholic, he didn’t argue that the mind came from matter, simply that the individual mind was separated from matter.

He began a process that would move past merely separating mind and matter, and toward a worldview that saw only matter as real. A contemporary of Descartes, Thomas Hobbes, went further and suggested that thinking arose from small mechanical processes happening in the brain. In doing so, Vervaeke points out, he was laying the ground for artificial intelligence:

…what Hobbes is doing is killing the human soul! And of course that’s going to exacerbate the cultural narcissism, because if we no longer have souls, then finding our uniqueness and our true self, the self that we’re going to be true to, becomes extremely paradoxical and problematic. If you don’t have a soul, what is it to be true to your true self? And what is it that makes you utterly unique and special from the rest of the purposeless, meaningless cosmos?

If the metaphysical foundations of our society tell us we have no soul, how on earth are we going to imbue soul into AI? Four hundred years after Descartes and Hobbes, our scientific methods and cultural stories are still heavily influenced by their ideas.

Perhaps the most significant outcome of that is that we still don’t really understand the interaction between matter and mind. The dominant theory of reality is still that matter is the only thing that’s real - physicalism. Physicalism can tell you what’s happening in the brain when you’re happy, for example, but it can’t tell you what it’s like to be you when you’re happy. It can quantify the code of an AI, but it can’t posit what it might feel like to be an AI. Science cannot explain how subjective experience arises from physical processes; this is known as ‘the hard problem of consciousness.’ From a physicalist viewpoint, your experience is a byproduct of matter. Nothing more than an illusion. A ghost in a meat machine.

Physicalism has been successful in many ways, and ushered in vital advancements that most of us wouldn’t want to do away with. However, its implications might lead to the end of all life on earth, so we may do well to start questioning it. Daniel Schmachtenberger has spoken at length about the ‘generator functions’ of existential risk, in essence the deeper driving causes. Two of the most important he points to are ‘rivalrous dynamics [like competitive fish farming]’ and ‘complicated systems [like the economic system of fish farming] consuming their complex substrate [the lake]’. He argues that “We need to learn how to build closed loop systems that don’t create depletion and accumulation, don’t require continued growth, and are in harmony with the complex systems they depend on.”

But what is the generator function of those generator functions? I would argue it’s physicalism. It has led to a cultural story in which it makes perfect sense to consume until we die, because our lived experience is supposedly an illusion generated by our brains. All the most important aspects of life, what it feels like to love, what it feels like to grieve, are secondary to things that can be quantified. It leaves us with a cold, dead world that isn’t worth saving, and that perhaps, deep down, we want to destroy.

...

So what does a conscious universe have to do with AI and existential risk? It all comes back to whether our primary orientation is toward quantity or toward quality. An understanding of reality that recognizes consciousness as fundamental views the quality of your experience as equal to, or greater than, what can be quantified.

Orienting toward quality, toward the experience of being alive, can radically change how we build technology, how we approach complex problems, and how we treat one another. It would challenge the physicalist notion, and the technology birthed by it, that the quantity (of things) is more important than the quality of our experience. As Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance, argued, a ‘metaphysics of quality’ would also open the door for ways of knowing made secondary by physicalism - the arts, the humanities, aesthetics and more - to contribute to our collective knowledge in a new way. This includes how we build AI.

It wouldn’t be a panacea. Many animistic cultures around the world have a consciousness-first outlook, and have still trashed their commons when faced with powerful incentives to do so. I don’t argue that a shift in our underlying metaphysics would magically stop us polluting every lake.

However, what it could do is provide us with a philosophical foundation on which we could build new cultural stories. New social games, new concepts around status, new incentive structures. Crucially, it would also recast consciousness as a universal quality of reality rather than a byproduct of matter. In doing so, it opens up new ways to talk about AI being self-aware that could even lead to the birth of a new species of artificial intelligence.

When I asked Bernardo Kastrup what he thought the implications would be of a widespread adoption of an Idealist perspective, he answered:

- It changes everything. Our sense of plausibility is culturally enforced, and our internalized true belief about what’s going on is culturally enforced…. They calibrate our sense of meaning, our sense of worth, our sense of self, our sense of empathy. It calibrates everything. Everything. So, culture is calibrating all that, and it leads to problems if that cultural story is a dysfunctional one. If the dysfunctional story is the truth, then, OK, we bite the bullet. The problem is that we now have a dysfunctional story that is obviously not true…. If you are an idealist, you recognize that what is worthwhile to collect in life is not things. It’s insights…. Consumerism is an addictive pattern of behavior that tries to compensate for the lack of meaning enforced by our current cultural narrative…. We engage in patterns of addictive behavior, which are always destructive to the planet and to ourselves, in order to compensate for that culturally induced, dysfunctional notion of what’s going on.

(https://beiner.substack.com/p/reality-eats-culture-for-breakfast)