Democratic AI

From P2P Foundation

Characteristics

By Vasilis Kostakis and Aristotle Tympas:

"Democratic AI requires four foundations:

Open source

Models must be open so researchers and citizens can examine them and identify problems.


Public funding

AI research must serve the common good, not private profit. Funding must flow directly to communities developing AI for social needs; not just universities producing papers, but projects maintaining actual tools people use. Currently, even widely-used open-source AI projects struggle to secure ongoing support (Bernstein & Crowley, 2022). We need sustained funding for community-controlled infrastructure: local model registries, shared computing cooperatives, and commons-based training programmes enabling communities to develop, deploy, and govern their own systems.


Democratic control

Decisions about AI use must emerge from transparent processes, not closed corporate boards. Those who control the "means of prediction" – data, computational infrastructure, and expertise – wield power comparable to historical control over the means of production (Kasy, 2025). Policy must create space for commons governance. Rather than regulation designed for corporate actors, we need legal frameworks recognising community ownership. Data sovereignty provisions should enable communities to control how their data trains models. Procurement rules should preference genuinely commons-governed projects where communities retain democratic control, not just "open-source" models corporations increasingly co-opt.


Social ownership

AI tools and infrastructures must belong to communities, not monopolies. The objectives encoded into AI systems ultimately reflect the priorities of those controlling the means of prediction (Kasy, 2025). When algorithms determine who gets hired, receives medical care, or sees which news, prioritising profit over social welfare produces predictable harms – from discriminatory housing loan denials to platforms optimising for engagement through anger and anxiety (Kasy, 2025). Models like the GovAI Coalition – where hundreds of US government bodies collectively set open-source AI procurement standards – show how collective institutions can establish standards serving public interests (Sharma & Adler, 2024). However, such coalitions must extend beyond governments to include workers, civil society, and affected communities as equal decision-makers. Community-controlled data cooperatives, operating democratically and transparently, offer an alternative to both corporate enclosure and state surveillance."

(https://policyreview.info/articles/news/ai-commons/2055)