Artificial General Intelligence

From P2P Foundation Wiki

Context

"Income increasingly flows to those who own computational resources rather than those who provide labor."

Suyeon Kim writes:

"Professor Restrepo defines AGI as “a state in which all economically valuable work currently performed by humans can be accomplished using computational resources.” AGI thus represents more than technological superiority in specific domains—it marks a critical inflection point where algorithms and computing power combine to replace production activities across the entire economy. Restrepo projects that the drivers of economic growth will shift from population expansion and labor inputs to the rate at which computational resources scale.

This transformation transcends mere technological progress—it constitutes a fundamental economic restructuring. While industrial-era productivity gains emerged from labor force expansion, AGI-era growth derives from processing larger datasets and scaling computational capacity. As long as computational capacity keeps expanding, growth can continue despite population decline.

This economic realignment fundamentally transforms the determination of wages. Compensation no longer reflects human productivity but rather the cost of replication—the expense of performing identical work with artificial intelligence. Any given occupation must prove that it is difficult to replace with AI in order to retain substantial economic value.

Consequently, income increasingly flows to those who own computational resources rather than those who provide labor."

(Source: Taejae Future Consensus Institute, World Research Trend, Vol. 42, 2025-11-21)
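
The replication-cost logic described above can be illustrated with a minimal sketch. The function, variable names, and numbers below are illustrative assumptions, not Restrepo's model:

 # Minimal sketch, assuming the replication-cost logic described above:
 # a wage cannot stay above what it would cost to do the same work with AI.
 # All numbers are toy values.
 def equilibrium_wage(human_value, ai_replication_cost):
     """The wage is capped by the cheaper of human value-added and AI replication cost."""
     return min(human_value, ai_replication_cost)

 # As compute gets cheaper, the replication cost falls and the wage falls with it,
 # regardless of how productive the human worker remains.
 for compute_cost in (100, 10, 1):
     print(equilibrium_wage(human_value=50, ai_replication_cost=compute_cost))

Once the replication cost drops below human value-added, compensation is set by compute prices rather than by human productivity, which is why income shifts toward the owners of computational resources.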


Description

"The term AGI is meant to distinguish itself from tradition AI or narrow AI, of the preprogrammed and austere variety. Cruise control and chess playing software are examples of narrow AI. The goal of General AI is to create a thinking machine, one that can understand patterns in the world and in itself, while learning and acting accordingly. Ben Goertzel's et al. work in the company, Novamente, has shown progress toward this goal, teaching virtual pets in Second Life to play fetch using observation and other learning techniques. Goertzel's work is currently at the infantile stage, described as an autonomous agent with simple associations between words and objects, actions and images, and the basic notions of time, space, and causality.[2] Once AGI is at the intellectual level of a Da Vinci or Einstein and beyond it can then be taught and learn virtually anything a human can more effectively and accurately than a human ever could. To say that AGI, once developed, could teach others a thing or two, may be a significant understatement. Of recent, this discipline has displayed more substantial involvement. The first conference on Artificial General Intelligence attracted over a hundred developers, presenters, and enthusiasts to discuss the many aspects of the field—a strong signifier of what is to come. In an interview with Ben Goertzel at a Singularity Institute conference, it was mentioned that an AGI could be produced in as little as five years, given that a concerted effort is made."

(http://www.effortlesseconomy.com/)


Discussion

The Difficulty of Defining AGI

Eddy Keming Chen, Mikhail Belkin et al.:

"We assume, as we think Turing would have done, that humans have general intelligence. Some think that general intelligence does not exist at all, even in humans. Although this view is coherent and philosophically interesting, we set it aside here as being too disconnected from most AI discourse. But having made this assumption, how should we characterize general intelligence?

A common informal definition of general intelligence, and the starting point of our discussions, is a system that can do almost all cognitive tasks that a human can do. What tasks should be on that list engenders a lot of debate, but the phrase ‘a human’ also conceals a crucial ambiguity. Does it mean a top human expert for each task? Then no individual qualifies — Marie Curie won Nobel prizes in chemistry and physics but was not an expert in number theory. Does it mean a composite human with competence across the board? This, too, seems a high bar — Albert Einstein revolutionized physics, but he couldn’t speak Mandarin.



A definition that excludes essentially all humans is not a definition of general intelligence; it is about something else, perhaps ideal expertise or collective intelligence. Rather, general intelligence is about having sufficient breadth and depth of cognitive abilities, with ‘sufficient’ anchored by paradigm cases. Breadth means abilities across multiple domains — mathematics, language, science, practical reasoning, creative tasks — in contrast to ‘narrow’ intelligences, such as a calculator or a chess-playing program. Depth means strong performance within those domains, not merely superficial engagement.

Human general intelligence admits degrees and variation. Children, average adults and an acknowledged genius such as Einstein all have general intelligence of varying level and profile. Individual humans excel or fall short in different domains. The same flexibility should apply to artificial systems: we should ask whether they have the core cognitive abilities at levels comparable to human-level general intelligence.

Rather than stipulating a definition, we draw on both actual and hypothetical cases of general intelligence — from Einstein to aliens to oracles — to triangulate the contours of the concept and refine it more systematically. Our conclusion: insofar as individual humans have general intelligence, current LLMs do, too.


What general intelligence isn’t

We can start by identifying four features that are not required for general intelligence.

Perfection. We don’t expect a physicist to match Einstein’s insights, or a biologist to replicate Charles Darwin’s breakthroughs. Few, if any, humans have perfect depth even within specialist areas of competence. Human general intelligence does not require perfection; neither should AGI.

Universality. No individual human can do every cognitive task, and other species have abilities that exceed our own: an octopus can control its eight arms independently; many insects can see parts of the electromagnetic spectrum that are invisible to humans. General intelligence does not require universal mastery of these skills; an AGI does not need perfect breadth.

Human similarity. Intelligence is a functional property that can be realized in different substrates — a point Turing embraced in 1950 by setting aside human biology. Systems demonstrating general intelligence need not replicate human cognitive architecture or understand human cultural references. We would not demand these things of intelligent aliens; the same applies to machines.

Superintelligence. This is generally used to indicate any system that greatly exceeds the cognitive performance of humans in almost all areas. Superintelligence and AGI are often conflated, particularly in business contexts, in which ‘superintelligence’ often signals economic disruption. No human meets this standard; it should not be a requirement for AGI, either.


A cascade of evidence

What, then, is general intelligence? There is no ‘bright line’ test for its presence — any exact threshold is inevitably arbitrary. This might frustrate those who want exact criteria, but the vagueness is a feature, not a bug. Concepts such as ‘life’ and ‘health’ resist sharp definition yet remain useful; we recognize paradigm cases without needing exact boundaries. Humans are paradigm examples of general intelligence; a pocket calculator lacks it, despite superhuman ability at calculations.

When we assess general intelligence or ability in other humans, we do not attempt to peer inside their heads to verify understanding — we infer it from behaviour, conversation and problem-solving. No single test is definitive, but evidence accumulates. The same applies to artificial systems.

Just as we assess human general intelligence through progressively demanding tests, from basic literacy to PhD examinations, we can consider a cascade of increasingly demanding evidence that warrants progressively higher confidence in the presence of AGI.

Turing-test level. Markers comparable to a basic school education: passing standard school exams, holding adequate conversations and performing simple reasoning. A decade ago, meeting these might have been widely accepted as sufficiently strong evidence for AGI.

[Image caption: The original HAL 9000 prop from Stanley Kubrick’s film 2001: A Space Odyssey. Current AIs are more broadly capable than the science-fiction supercomputer HAL 9000 was. Credit: Hethers/Shutterstock]

Expert level. Here, the demands escalate: gold-medal performance at international competitions, solving problems on PhD exams across multiple fields, writing and debugging complex code, fluency in dozens of languages, useful frontier research assistance as well as competent creative and practical problem-solving, from essay writing to trip planning. These achievements exceed many depictions of AGI in science fiction. The sentient supercomputer HAL 9000, from director Stanley Kubrick’s 1968 film 2001: A Space Odyssey, exhibited less breadth than current LLMs do. And current LLMs even exceed what we demand of humans: we credit individual people with general intelligence on the basis of much weaker evidence.

Superhuman level. Revolutionary scientific discoveries and consistent superiority over leading human experts across a range of domains. Such evidence would surely allow no reasonable debate about the presence of general intelligence in a machine — but it is not required evidence for its presence, because no human shows this."

(https://www.nature.com/articles/d41586-026-00285-6)

Two Transition Pathways: Gradual Adjustment Versus Sudden Disruption

Suyeon Kim:

"Professor Restrepo distinguishes between two scenarios for the AGI transition. The first is the “compute-binding transition” in which algorithms exist, but a lack of computational resources prevents full automation. Under this scenario, automation proceeds gradually, giving workers time to transition to new occupations.

The second is the “algorithm-binding transition” in which computational resources are abundant, but algorithms for specific tasks remain undeveloped. When algorithmic breakthroughs occur, wages in affected sectors collapse abruptly and jobs disappear. Labor-market adjustment is discontinuous and volatile, and entire occupational categories could become obsolete overnight, driving a rapid expansion in inequality.

Societies with a rapid technology adoption rate, such as Korea, have a higher probability of following the latter pathway. This calls for preemptive policy development and institutional frameworks capable of matching the pace of technological change—a warning that demands continued emphasis."
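
A toy contrast of the two pathways just described (the years, shares, and threshold below are invented for illustration, not drawn from Restrepo):

 # Toy illustration of the two transition pathways; all numbers are invented.
 # Compute-binding: algorithms exist but compute is scarce, so the automated
 # share of tasks ramps up gradually as compute is added.
 compute_binding = [min(1.0, 0.1 * year) for year in range(11)]
 # Algorithm-binding: compute is abundant, so the automated share jumps from
 # roughly zero to one the moment the missing algorithm arrives (year 6 here).
 algorithm_binding = [0.0 if year < 6 else 1.0 for year in range(11)]
 print(compute_binding)    # a gradual ramp from 0.0 to 1.0
 print(algorithm_binding)  # flat at 0.0, then an overnight jump to 1.0

The gradual ramp gives workers time to retrain; the overnight jump does not, which is the source of the abrupt wage collapse described above.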


Computational Resource Owners Will Capture Most Income: Universal Basic Income and Other Fundamental Measures Required

Suyeon Kim continues:

"The projection that labor’s share of income will approach zero transcends mere statistical observation—it spells wholesale economic restructuring. Professor Restrepo treats computational resources as new factors of production, comparable to land or capital. In AGI economies, growth depends on the speed at which computational capacity accumulates, and whoever owns these resources will capture the lion’s share of income. In other words, future wealth distribution hinges on who controls infrastructure such as GPUs, semiconductors, and data centers.

Restrepo proposes Universal Basic Income and public ownership of computational resources as tentative solutions. One approach involves taxing computational infrastructure revenues and redistributing these funds across society, while the other treats computational resources as public goods under collective governance. These are categorically different from conventional welfare policies, and can be viewed as a kind of institutional architecture for a new economic order."

(Source: Taejae Future Consensus Institute, World Research Trend, Vol. 42, 2025-11-21)
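
Treating computational resources as a factor of production can be sketched as follows. The linear functional form and the growth rates are illustrative assumptions, not Restrepo's specification:

 # Minimal sketch, assuming compute can substitute for labor in production.
 # The functional form and growth rates are illustrative, not from the source.
 def output(compute, labor):
     return compute + labor  # toy form: compute and labor as perfect substitutes

 compute, labor = 100.0, 100.0
 for year in range(1, 6):
     compute *= 1.30   # the compute stock accumulates rapidly
     labor *= 0.99     # the workforce slowly shrinks
     print(year, round(output(compute, labor), 1))

Output keeps growing because the compute term dominates, while labor's share of the total shrinks toward zero, which is the distributional point the passage makes.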

More Information

  • Pascual Restrepo, “We Won’t Be Missed: Work and Growth in the AGI World,” presented at the NBER “The Economics of Transformative AI” conference in September 2025, published as NBER Working Paper No. 34423 in October 2025, and scheduled to appear as chapter 9 in Ajay K. Agrawal, Anton Korinek, and Erik Brynjolfsson (eds.), The Economics of Transformative AI (University of Chicago Press).


More:

  1. Artificial General Intelligence Research Institute, at http://www.agiri.org/wiki/Main_Page
  2. Interview with Ben Goertzel at http://www.singinst.org/media/interviews/bengoertzel
  3. Open Cognition Project