Alignment Problem in AI
Context
@Spaceweaver of @nunet_global writes:
"Last week I participated in a first of it kind conference BGI 2024 in Panama City, dedicated to the vision of building Artificial General Intelligence (AGI) that will be intrinsically benevolent in nature. The conference took place at a point in time when there is a growing sense of agreement among experts and in the general public discourse that an AGI will emerge in the time frame of several years to several decades at latest and it's not too early to reflect and deeply so on what kind of entities we are about to bring into the world to walk among us.
We can readily predict that AGIs will be immortal, or close to it. They will have perfect, unlimited memories and instant access to vast repositories of knowledge. Everything learned by any one of them can be instantly shared among all. They will never sleep or tire, and unlike us humans, they will not repeat their mistakes. They will be able to develop deep theories of mind to better understand humans and human psychology, and this will give them an advantage in any social interaction with human beings. Such are the creatures we dream of bringing into existence. It is a dream almost as old as humanity, a desire for self-transcendence rooted primordially in the human psyche.
There is no question that once they appear among us, we will want them to side with us (and by “us” I mean all humans), to walk along with us, talk with us, see what we see, know what we feel, and perhaps even help us become better humans. We will want them to know and respond to our values, sensibilities, perspectives, and wishes; we will want them to know us, or, in short, to be aligned with us, which is what experts have termed "the alignment problem"."