Alternative Imaginaries for AI

Discussion

James O'Sullivan:

"The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods. These alternatives show that you do not have to join the race to superintelligence or renounce technology altogether. It is possible to build and govern automation differently now.

Across the world, communities have begun experimenting with different ways of organizing data and automation. Indigenous data sovereignty movements, for instance, have developed governance frameworks, data platforms and research protocols that treat data as a collective resource subject to collective consent. Organizations such as the First Nations Information Governance Centre in Canada and Te Mana Raraunga in Aotearoa insist that data projects, including those involving AI, be accountable to relationships, histories and obligations, not just to metrics of optimization and scale. Their projects offer working examples of automated systems designed to respect cultural values and reinforce local autonomy, a mirror image of the effective altruist impulse to abstract away from place in the name of hypothetical future people.

Workers are also experimenting with different arrangements, and unions and labor organizations have negotiated clauses on algorithmic management, pushed for audit rights over workplace systems and begun building worker-controlled data trusts to govern how their information is used. These initiatives emerge from lived experience rather than philosophical speculation, from people who spend their days under algorithmic surveillance and are determined to redesign the systems that manage their existence. While tech executives are celebrated for speculating about AGI, workers who analyze the systems already governing their lives are still too easily dismissed as Luddites.

Similar experiments appear in feminist and disability-led technology projects that build tools around care, access and cognitive diversity, and in Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints. Degrowth-oriented technologists design low-power, community-hosted models and data centers meant to sit within ecological limits rather than override them. Such examples show how critique and activism can progress to action, to concrete infrastructures and institutional arrangements that demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed.

What unites these diverse imaginaries — Indigenous data governance, worker-led data trusts, and Global South design projects — is a different understanding of intelligence itself. Rather than picturing intelligence as an abstract, disembodied capacity to optimize across all domains, they treat it as a relational and embodied capacity bound to specific contexts. They address real communities with real needs, not hypothetical humanity facing hypothetical machines. Precisely because they are grounded, they appear modest when set against the grandiosity of superintelligence, and the rhetoric of existential risk makes every other concern look small by comparison. You can predict the ripostes: Why prioritize worker rights when work itself might soon disappear? Why consider environmental limits when AGI is imagined as capable of solving climate change on demand?

These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems. Once algorithms mediate communication, employment, welfare, policing and public discourse, they become political institutions. The power structure is feudal, comprising a small corporate elite that holds decision-making power justified by special expertise and the imagined urgency of existential risk, while citizens and taxpayers are told they cannot grasp the technical complexities and that slowing development would be irresponsible in a global race. The result is learned helplessness, a sense that technological futures cannot be shaped democratically but must be entrusted to visionary engineers.

A democratic approach would invert this logic, recognizing that questions about surveillance, workplace automation, public services and even the pursuit of AGI itself are not engineering puzzles but value choices. Citizens do not need to understand backpropagation to deliberate on whether predictive policing should exist, just as they need not understand combustion engineering to debate transport policy. Democracy requires the right to shape the conditions of collective life, including the architectures of AI.

This could take many forms. Workers could participate in decisions about algorithmic management. Communities could govern local data according to their own priorities. Key computational resources could be owned publicly or cooperatively rather than concentrated in a few firms. Citizen assemblies could be given real authority over whether a municipality moves forward with contentious uses of AI, like facial recognition and predictive policing. Developers could be required to demonstrate safety before deployment under a precautionary framework. International agreements could set limits on the most dangerous areas of AI research. None of this is about whether AGI, or any other kind of superintelligence one can imagine, does or does not arrive; it’s simply about recognizing that the distribution of technological power is a political choice rather than an inevitable outcome.

The superintelligence narrative undermines these democratic possibilities by presenting concentrated power as a tragic necessity. If extinction is at stake, then public deliberation becomes a luxury we cannot afford. If AGI is inevitable, then governance must be ceded to those racing to build it. This narrative manufactures urgency to justify the erosion of democratic control, and what begins as a story about hypothetical machines ends as a story about real political disempowerment. This, ultimately, is the larger risk: while we debate the alignment of imaginary future minds, we neglect the alignment of present institutions.

The truth is that nothing about our technological future is inevitable except further technological change itself. Change is certain, but its direction is not. We do not yet understand what kinds of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Every algorithm embeds decisions about values and beneficiaries. The superintelligence narrative masks these choices behind a veneer of destiny, but alternative imaginaries — Indigenous governance, worker-led design, feminist and disability justice, commons-driven models, ecological constraints — remind us that other paths are possible and already under construction.

The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence, because the future of AI is a political field, open to contestation. It belongs not to those who warn most loudly of gods or monsters, but to publics with the moral right to democratically govern the technologies that shape their lives."

(https://www.noemamag.com/the-politics-of-superintelligence/)