AI Regulation
= "how to confront the deeper, structural problem: what kind of institutions does democratic society need to govern digital power with genuine accountability, and what design choices make those institutions durable enough to matter?" [1]
Typology
Naeema Zarif:
"The dominant approach to digital governance over the past decade has moved along roughly three tracks, each with genuine contributions and serious structural limitations.
The first is cybersecurity-centric governance: understanding digital risk primarily as a matter of technical resilience, threat detection, and incident response. The frameworks it produces are real and necessary, but viewing digital governance through a security lens fundamentally narrows what is recognized as a problem. It tends to frame the public as a vulnerable population to be protected from external threats rather than as rights-bearing citizens who should have meaningful say over how digital systems affect their lives. Lene Hansen and Helen Nissenbaum have shown how securitization logic, when applied to digital domains, creates a gravitational pull toward hypersecuritization, a tendency to magnify threats in ways that justify extraordinary measures and crowd out civil liberties considerations. When cybersecurity becomes the master frame, privacy, autonomy, and democratic oversight get re-positioned as secondary concerns, inconvenient frictions in an otherwise urgent technical project.
The second track is compliance-based governance: the GDPR model, broadly construed. Regulation through consent check-boxes, data protection officers, and breach-notification requirements has produced real improvements at the margins, but it has not resolved the underlying political economy of digital power. Julie Cohen's work makes the essential point: privacy law has been largely reconstructed to serve the interests of informational capitalism rather than democratic self-governance. Legal compliance becomes a legitimizing veneer rather than a substantive constraint. Companies invest in compliance architecture while continuing to extract value from behavioral data at scale. The problem, in other words, is not that we lack rules; it is that the rules are calibrated to manage liability rather than to protect rights.
The third track is narrow AI ethics: voluntary principles issued by companies, industry consortia, and sometimes governments. Brent Mittelstadt has argued that AI ethics guidelines lack the institutional scaffolding that makes ethical frameworks effective in professional domains like medicine or law: there are no fiduciary duties, no binding professional norms, no independent accountability mechanisms, no proven methods for translating principles into practice. Ben Wagner coined the term "ethics-washing" to describe what happens next: voluntary ethics frameworks become substitutes for binding regulation rather than complements to it. The more elaborate the ethics statement, the more effectively it pre-empts legislative action. What looks like moral seriousness turns out to be regulatory arbitrage."
(https://naeemazarif.substack.com/p/a-new-playbook-for-ethical-digital)
Discussion
Technology Policy approaches
Naeema Zarif:
"Technology policy is, almost everywhere, dominated by a narrow triangle of state agencies, legal departments, and technical experts, with corporate actors disproportionately present in all three. This concentration is not politically neutral. Philip Pettit’s republican theory of freedom, grounded in the principle of non-domination, holds that what threatens human freedom is not merely actual interference but the capacity for arbitrary interference, the condition of being subject to power that operates without effective contestation. Digital governance structured around expert panels and compliance teams, insulated from public challenge, reproduces precisely this structure of unchallengeable power at scale.
The alternative is not a naive faith in crowd-sourced decision-making, but something more rigorous: structured inclusion. Sheila Jasanoff's concept of "technologies of humility" proposes systematic methods for governing science and technology that center on framing, vulnerability, distribution, and learning, methods that take seriously what experts do not and cannot know, and that create space for the perspectives of those who will bear the costs of technological choices. Archon Fung's work on empowered participatory governance demonstrates through sustained empirical research that well-designed participatory institutions do not simply add legitimacy, they generate better solutions, especially for complex problems where affected communities have knowledge that specialists lack.
The growing movement toward citizens' assemblies on technology policy is one promising expression of this logic. The Belgian Citizens' Panel on AI in 2024, organized during Belgium's EU Council Presidency, represented the first citizens' assembly specifically focused on AI governance in a European presidency context. Broader than a stakeholder consultation, it brought together randomly selected citizens to deliberate over AI policies after sustained engagement with evidence. The OECD has documented nearly 600 citizens' assemblies worldwide, describing a "deliberative wave" of democratic innovation. These are not replacements for legislative or regulatory authority; they are inputs into governance processes that currently lack channels for genuine public reasoning.
The stakes are especially high in fragile, conflict-affected, and post-conflict settings, where digital governance is often weakest and the consequences of getting it wrong most severe. Civic technology projects that were genuinely built with communities, not merely deployed at them, have shown what is possible. Ushahidi, born from Kenya’s post-election violence in 2007–2008, pioneered crowdsourced crisis mapping precisely because communities were both the information source and the intended beneficiary. Over ninety thousand deployments across a hundred and sixty countries followed. The lesson was not that the technology was novel; it was that the design relationship was different. Tools built through genuine partnership carry a different kind of social authority than tools imposed by institutional decree.
The deeper principle at work here is that governance derives its legitimacy not only from formal authorization (a law passed, a regulation adopted, a mandate conferred) but from ongoing social alignment: the active assent of those it governs, maintained through processes they recognize as fair, accessible, and responsive to their interests. A governance framework that is technically authorized but socially disconnected is not, in any meaningful sense, accountable. It is merely empowered."
(https://naeemazarif.substack.com/p/a-new-playbook-for-ethical-digital)
Accountability Frameworks
Naeema Zarif:
"Accountability is perhaps the most overused and under-specified concept in governance discourse. Everyone supports it; far fewer are willing to specify what it actually requires.
Mark Bovens’s analytical framework distinguishes between accountability as a virtue (meaning someone is conscientious and responsible) and accountability as a mechanism: a structured relationship in which actors are obliged to explain and justify their conduct to a forum with the authority to evaluate it and impose consequences. The virtue is easy to claim. The mechanism is what actually constrains behavior. Much of what passes for accountability in digital governance is virtue theater (mission statements, ethics officers, responsible AI teams) without the structural mechanism of independent evaluation, meaningful transparency, and effective redress. What would the mechanism actually look like? Three requirements stand out.
First, intelligibility: people must be able to understand, at least in principle, how decisions affecting them are being made. Frank Pasquale's work on algorithmic opacity remains clarifying here: the claim is not that citizens need to read source code, but that the logic, data, and assumptions embedded in consequential automated systems should be contestable by those with the standing and expertise to challenge them.
Danielle Citron's concept of technological due process extends this further: automated systems that make or substantially influence decisions about welfare, employment, housing, and public safety ought to be subject to something analogous to administrative due process, not because they are legal judgments in a formal sense, but because they function as such.
Second, effective challenge: intelligibility without recourse is insufficient. Virginia Eubanks's research on automated poverty management documents how algorithmic systems affecting some of the most vulnerable populations in the United States operated with minimal transparency and effectively no viable appeal mechanisms. When errors occurred (wrongful termination of benefits, misclassification of risk), the burden of proof fell entirely on individuals who lacked both the information and the resources to mount a credible challenge. The EU AI Act, the world's first comprehensive AI regulatory framework, introduces mandatory fundamental rights impact assessments for high-risk applications, a significant structural advance over purely technical compliance. But enforcement remains untested, and the act's risk-based architecture depends heavily on adequate implementation by national authorities that vary enormously in capacity.
Third, what Onora O'Neill calls "intelligent accountability": a design principle that is often missed entirely. Complex, legalistic accountability systems can themselves damage trust by creating perverse incentives: gaming of metrics, compliance theater, diversion of energy from substantive work into documentation. Well-designed accountability measures should support trustworthiness and sound judgment rather than substitute for them. Accountability structures that reduce institutional actors to box-checkers produce neither better outcomes nor genuine public confidence.
The Government of Canada’s Directive on Automated Decision-Making, which introduced a mandatory algorithmic impact assessment tool across federal departments in 2020, offers an instructive case of trying to combine all three elements: a structured self-assessment process, proportionate mitigation requirements calibrated to assessed risk levels, and mandatory human oversight for high-impact decisions. It is imperfect and still evolving. But it has the important architectural feature of being embedded in administrative law rather than voluntary corporate policy, which means it is, at least in principle, enforceable."
(https://naeemazarif.substack.com/p/a-new-playbook-for-ethical-digital)
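To make the architecture described above concrete, the following is a minimal, hypothetical sketch (in Python) of how a risk-tiered impact assessment of this general shape can be wired together. It is not the Government of Canada's AIA tool: every question, weight, threshold, and mitigation item below is invented for illustration. What it shows is the structural pattern the directive relies on: a structured self-assessment produces a risk level, and mitigation obligations, including human oversight at the highest tiers, attach automatically to that level.

```python
# Illustrative sketch of a risk-tiered algorithmic impact assessment.
# All questions, weights, thresholds, and mitigation lists are hypothetical;
# they stand in for the structured self-assessment pattern described above,
# not for the actual Government of Canada AIA scoring scheme.

from dataclasses import dataclass

@dataclass
class Answer:
    question: str   # e.g. "Does the system affect access to benefits?"
    score: int      # weighted points this answer contributes

def impact_level(answers: list[Answer]) -> int:
    """Map a total self-assessment score to an impact level (1-4)."""
    total = sum(a.score for a in answers)
    # Hypothetical scoring bands; a real directive defines its own.
    if total < 25:
        return 1   # little to no impact
    if total < 50:
        return 2   # moderate impact
    if total < 75:
        return 3   # high impact
    return 4       # very high impact

def required_mitigations(level: int) -> list[str]:
    """Mitigation requirements scale with the assessed impact level."""
    required = ["publish assessment results", "document data provenance"]
    if level >= 2:
        required.append("peer review of the system before deployment")
    if level >= 3:
        required += ["human-in-the-loop review of each decision",
                     "recourse process for affected individuals"]
    if level == 4:
        required.append("approval by a senior designated official")
    return required

if __name__ == "__main__":
    answers = [
        Answer("Decision affects eligibility for benefits", 30),
        Answer("System processes personal information", 20),
        Answer("Outputs are fully automated", 25),
    ]
    level = impact_level(answers)
    print(f"Impact level: {level}")
    for m in required_mitigations(level):
        print(" -", m)
```

The design point worth noticing is that proportionality is built into the mechanism itself: obligations follow mechanically from the assessed level rather than being left to case-by-case discretion, which is part of what makes a framework embedded in administrative law auditable and, in principle, enforceable.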