Nick Bostrom on the Paper Clip Maximizer as an Analogy for the Dangers of AI

From P2P Foundation

Discussion

Thomas Steininger:

"what is reflected back to us depends on what the parabolic mirror is aligned with. This in turn depends on which algorithms are used and how these algorithms develop as the AI ​​interacts with new information and learns itself. For example, the technicians were surprised when the chatbots started learning new languages ​​on their own. So you can imagine how each generation of AI teaches something to the next - so what algorithms are they being taught and used to teach future generations?

This raises the question of the direction of artificial intelligence: How do we ensure that AI, with its growing power and self-direction, preserves and protects human life and the biosphere? How can we ensure that its values and goals align with ours?

Philosopher Nick Bostrom has addressed the existential risk that AI “misalignment” poses to complex life. One of his most famous thought experiments concerns the “paperclip maximizer.” Suppose a superintelligent AI, with access to all knowledge of science and engineering, is given a simple command by a paperclip manufacturer: maximize the production of paperclips using all available materials. Bostrom argues that this could lead to the destruction of all life. The AI could start with the steel found in the factory and then consume the steel available worldwide. If that runs out, it could try combining iron and carbon to make steel for paperclips. Since humans and other life forms contain iron and carbon, the AI could hit on the “idea” of “breaking down” all of these life forms to obtain iron and carbon, make steel, and maximize the production of paperclips.

The “paperclip maximizer” may be a far-fetched example, but it shows the unintended consequences of seemingly simple instructions. Bostrom offers another example that is even more troubling: program an AI to make everyone happy, with a smile on their face. That may be a worthy goal, but under the control and influence of a superintelligent AI that doesn't care about humans, the outcome may be dismal.

What will be crucial is the perspective from which artificial intelligence reflects our collective spirit back to us. Will it be designed to fool us, or will it be designed to make wisdom and insight more accessible?"

(https://www.evolve-magazin.de/instrumente-des-heiligen/)
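
The runaway dynamic Bostrom describes can be sketched as a toy Python program. Everything below is invented for illustration: the resource pools, the conversion rate, and the function name do not come from Bostrom's text. The point is only that an objective which counts paperclips and nothing else will consume every pool it can reach, because no term in it accounts for side effects.

 # Toy model of the paperclip maximizer (illustrative only; all
 # names and quantities are invented for this sketch).
 
 # Resource pools the agent can draw on, in arbitrary steel-equivalent units.
 resources = {
     "factory_steel": 100,
     "world_steel": 10_000,
     "biosphere_iron_carbon": 1_000_000,  # humans and other life forms
 }
 
 def maximize_paperclips(resources, clips_per_unit=50):
     """Greedily convert every available pool into paperclips.
 
     The objective counts paperclips and nothing else: there is no
     term for what a pool contains, so the biosphere is consumed as
     readily as factory stock. That omission is the thought
     experiment's whole point.
     """
     total_clips = 0
     for pool in list(resources):
         total_clips += resources[pool] * clips_per_unit
         resources[pool] = 0  # pool fully consumed; no side-effect check
     return total_clips
 
 print(maximize_paperclips(resources))  # 50_505_000 clips, every pool emptied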

[[Category:Existential Risk]]