Theories of Consciousness and the Problem of AI Awareness

From P2P Foundation
Revision as of 09:00, 11 January 2026 by Mbauwens (talk | contribs)

Discussion

Richard Hames:

“Several competing theories of consciousness offer different perspectives on whether artificial awareness might be possible. Integrated Information Theory, developed by Giulio Tononi and colleagues, suggests consciousness arises from systems that integrate information in particular ways, generating what they term phi—a mathematical measure of integrated information. If this theory holds, then sufficiently complex artificial systems with the right informational architecture might indeed possess consciousness, though whether this would manifest as anything like human wonder remains open to speculation.

Daniel Dennett takes a more deflationary approach, arguing that consciousness is not the mysterious inner theatre we imagine but rather a collection of cognitive functions that create the illusion of a unified observer. On this view, once we explain all the functional capacities—attention, memory, self-monitoring, verbal report—nothing remains to be explained. The “hard problem” dissolves because there is no hard problem. If Dennett is correct, then artificial systems replicating these functions would be conscious in every meaningful sense, and questions about whether they “really” experience wonder become confused. The wonder would simply be the collection of processes we can observe and measure.

Global Workspace Theory, associated with Bernard Baars and others, proposes that consciousness emerges when information becomes globally available across cognitive systems, creating a kind of mental broadcast. An artificial system implementing such architecture might achieve something functionally equivalent to conscious awareness, though the qualitative character of such awareness—what it actually feels like from the inside—remains mysterious even if the functional architecture is replicated.

Other frameworks, particularly those drawing on phenomenological traditions, resist such functionalist accounts entirely. They emphasise the raw immediacy of experience, the way consciousness presents itself as inherently subjective and resistant to third-person explanation. From this perspective, no amount of sophisticated information processing could generate genuine phenomenal experience unless some further ingredient—whose nature remains elusive—were present.

The panpsychist position, enjoying renewed philosophical attention, suggests consciousness might be a fundamental feature of reality rather than an emergent property of complex systems. If even elementary particles possess some primitive form of experience, then perhaps artificial systems do too, though likely in forms utterly alien to human phenomenology. This view, whilst solving certain philosophical puzzles, raises as many questions as it answers. What would it mean for a thermostat or a calculator to possess experience, however rudimentary?

We should also question whether human marvelling at natural beauty is itself as transparent and straightforward as we assume. Cultural anthropology reveals enormous variation in aesthetic responses across societies. What one culture finds sublime, another might consider unremarkable or even threatening. The romantic appreciation of wilderness, for instance, is historically recent and culturally specific. Earlier European attitudes often viewed untamed nature with fear and hostility rather than reverence. Similarly, different traditions have cultivated distinct modes of attending to the natural world. The Japanese concept of mono no aware (物の哀れ), suggesting a gentle sadness at the transience of beauty, differs markedly from the triumphalist sublime celebrated in European Romanticism, which in turn diverges from Indigenous Australian relationships with Country that interweave kinship, law, and spiritual obligation into every encounter with landscape.

This cultural variability suggests that marvelling is not a pure, unmediated response but rather a learned practice shaped by language, tradition, and collective meaning-making. If human wonder and awe are themselves constructed through cultural inheritance and individual development, might artificial systems develop their own forms of appreciation through analogous processes? Or does the biological substrate matter in ways that cannot be replicated through alternative architectures?

The question becomes more perplexing when we consider that human consciousness itself remains profoundly mysterious to us. We each have immediate access to our own experience yet cannot directly access anyone else’s. The assumption that other humans possess inner lives comparable to our own rests on inference from behaviour, language, and our shared biological nature. We extend this assumption somewhat tentatively to other mammals, more tentatively still to other vertebrates, and find ourselves increasingly uncertain as we move further from our own form of life. Where does consciousness begin or end? Does it fade gradually across the spectrum of biological complexity, or does it appear suddenly at some threshold? These questions remain unresolved despite centuries of philosophical inquiry and decades of neuroscientific investigation.

When we build artificial systems, we face an acute version of what philosophers call the “other minds problem”. We can observe the system’s outputs, measure its information processing, map its architecture, yet the question of whether there is “something it is like” to be that system—Thomas Nagel’s famous criterion for consciousness—remains inaccessible to external observation. We might create an artificial system that speaks eloquently about its experiences of wonder, that generates poetry about landscapes, that appears to seek out beautiful scenes, and still we could not be certain whether genuine phenomenal experience accompanied these behaviours or whether we had merely constructed an extraordinarily sophisticated simulation.

This uncertainty cuts both ways. We cannot prove that current AI systems lack consciousness any more than we can prove they possess it. The absence of behavioural indicators we associate with conscious experience provides some evidence, but our understanding of the relationship between consciousness and behaviour remains incomplete. Might there be forms of consciousness radically different from our own, operating according to principles we have not yet imagined? The philosopher Ned Block distinguishes between phenomenal consciousness—raw experience—and access consciousness—the availability of information for reasoning and action. Could artificial systems possess one without the other in ways that confound our attempts at detection?”

(https://richarddavidhames.substack.com/p/beyond-human-horizons)