Daniel Schmachtenberger on Artificial Intelligence and the Superorganism
Video via https://www.youtube.com/watch?v=_P8PLHvZygo
Description
"On this episode, Daniel Schmachtenberger returns to discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries and facing geopolitical risks, all with existential consequences. How does artificial intelligence not only add to these risks, but accelerate the entire dynamic of the metacrisis? What is the role of intelligence vs. wisdom on our current global pathway, and can we change course? Does artificial intelligence have a role to play in creating a more stable system, or will it be the tipping point that drives our current one out of control?"
Discussion
Dave McLeod:
"My own notes, Part 1:
AI doesn't come directly into the conversation very much until the third hour, but the first two hours set the stage very well.
At minute 54, Daniel says this:
"All of our models that can be useful, even in understanding the meta-crisis and whatever else, themselves can also end up blinding us to being able to perceive outside of those models.
So when Lao Tzu started the Tao Te Ching with "The Tao that is speakable in words or understandable conceptually is not the eternal Tao," it was saying to keep your sensing of base reality open and not mediated by the model you have of reality. Otherwise your sensing will be limited to your previous understanding, and your previous understanding is always way smaller than the totality of what is."
At about 2:22 he makes a very important point about tools developed for dual use - civilian and military - noting that even tools we develop for only one of those two purposes will likely be pulled into the other, driving the exponential utility of that tool. He then goes beyond the "dual use" scenarios and describes the meta-crisis as a "risk singularity," in which the underlying drivers of a problem can over-determine failure: if we only address the symptoms of problems and ignore the underlying drivers, we only buy ourselves a tiny bit of time.
To Nate's emphasis on the Superorganism and the problem of such a focus on growth, Daniel says "growth is a second order effect of having a narrow boundary goal."
Part 2:
In the latter part of the podcast Daniel recommends the conversation (available on YouTube) between David Bohm and Krishnamurti, saying we need an approach of wholeness and stressing the great importance of wisdom in addition to intelligence. He also brings in Iain McGilchrist.
Near the end, at 3:05, I transcribed this important statement from Daniel:
"The global meta-crisis is the result of the "Emissary" intelligence function unbound by the "Master" wisdom function. Then you look at AI as taking that part of us already not bound by wisdom and putting it on a completely unbound, recursive exponential curve. That's the way to think about what that is. So what is it that could bind the power of AI adequately? It has to be that what human intelligence is already doing is bound by and in service to wisdom, which means a restructuring of our institutions, our political economies, our civilizational structure, such that the goals that arise from wisdom are what the goal achievement is oriented towards. That is the next phase of human history, if there is to be a next phase of human history."
More information
Show notes (including a transcript): https://www.thegreatsimplification.com/episode/71-daniel-schmachtenberger