How to really democratize AI
The development of AI must not be left solely in the hands of the big tech corporations. To harness its potential in complex and divided democracies, we rely on vigilant journalists.
By Dr. Annette Zimmermann, Assistant Professor of Philosophy & Affiliate Professor of Statistics at the University of Wisconsin-Madison. On 14 July, she gave a talk at Publix.

The public debate about artificial intelligence is dominated by extremes. On one side, techno-optimists hyperfocus on AI’s transformative potential; on the other, doomers warn of apocalyptic risk. Both perspectives easily lend themselves to flashy think pieces and sales pitches. But they ultimately distract us from a deeper, more urgent issue in the AI space: the danger that we, as citizens, end up sleepwalking into ceding control of AI entirely to unelected and unaccountable private-sector actors.
Big Tech firms like to claim (implausibly) that they are automatically ‘aligned with democratic values’ simply because they operate in a territory where democratic procedures are in place more generally. Meanwhile, the same actors routinely gatekeep actual decision-making power over AI’s direction and purpose in our society—and many public officials, misdirected by AI hype and doom, let them. To many, it seems that letting Big Tech make unilateral decisions about when and how to deploy a never-ending slew of new AI tools is justified on the grounds that only they possess the technical know-how, resources, and speed to compete with non-democratic actors, and thus to win the ‘AI arms race’ on our behalf.
If we care about democratic legitimacy, the fundamental question is not just how to build safe and ‘democratically aligned’ AI (presumably while not unduly burdening benevolent tech broligarchs with too many pesky guardrails), but how to actually increase direct democratic control over AI in practice. When it comes to pursuing this goal effectively in complex and divided democracies, truth-seeking journalists and activists are the essential catalyst.
While writing my forthcoming book, Democratizing AI, I kept encountering a common objection: the concern that democracies are often not great at making decisions. Democratic crisis and conflict are worsening. Democratic processes are vulnerable to misinformation and nefarious forms of influence. Democratic constituencies tend to move slowly, yet they can capriciously change course, and—especially in polarized societies—they might be prone to ignoring scientific and technical expertise. Why should we trust such imperfect institutions with control over the most powerful and complex technology?
Yes, democratic processes and institutions are highly imperfect; but nondemocratic control by corporate boards and tech elites is worse. The current crises of democracy are real: it would be naive to deny this. However, these problems are not a good enough reason to give up on the goal of democratic control; they are a reason to double down.
Democratic systems rely on a vibrant civil society to mediate, translate, and challenge official narratives. When people say “democracy fails,” they are often pointing to breakdowns in this intermediary layer: newsrooms without investigative capacity, activist networks without resources, publics that lack access to relevant facts. The solution is not to sideline democracy in the name of efficiency and innovation. Instead, we need to invest in emboldening and repairing the institutions that make it function. That includes strengthening the capacity of civil society actors to scrutinize AI systems and those who build them, and to reimagine the direction in which we as a society steer innovation.
A second possible objection takes a different form: democratic control over AI may well be desirable; but it is impossible to achieve. After all, AI development is already captured by a small group of firms; most people (and indeed, many elected officials!) do not understand the systems in question.
Again, these are reasonable concerns. My worry, though, is that this fatalism is self-fulfilling. If the public remains passive, power will remain where it is. But if journalists and activists increase public awareness of, and engagement with, the trade-offs that are at stake with a given new AI tool, they can help empower citizens to become active participants in public debate on AI, and thereby shift what seems politically possible. The goal is not to make everyone an AI expert: instead, the goal is to make it possible for more people to shape the story of what place AI will take in our society.
There are (at least) three things that journalists and activists can and should do at this juncture:
First, it is crucial that civil society actors keep investing forcefully in scrutinizing the expanding web of deals between tech companies and public institutions. Key procurement decisions and data-sharing agreements often escape public attention entirely. Consider Germany’s adoption of Palantir’s law enforcement software. Multiple federal states now use the tool, but under noticeably bland, innocuous names that obscure its origin: HessenData in Hesse, VeRA in Bavaria. There is not enough nuanced and well-informed public debate about whether citizens truly want a foreign company integrated into their core security infrastructure, and about whether the deployment of these tools may encroach on fundamental rights, nor is there much discussion about whether there might be viable alternatives. Journalists and activists play a key role in making these deals visible, framing the key issues in a way that is intelligible to citizens without much background knowledge, and mobilizing support for procedures that allow for democratic input before—not after—such deals are finalized.
Second, civil society has a powerful role to play in challenging the narratives that are currently driving technology policy decisions. In the EU, there is familiar anxiety about falling behind in the global race for AI innovation. My concern is that this narrative often leads to reactive, short-sighted efforts to build local competitors to OpenAI or Google, as if replicating their structure were the only viable path. But it is not obvious that that is the best and most responsible available alternative. Civil society actors can and should challenge public officials to critically examine the ‘arms race’ logic itself, and to ask whether trying to play ‘catch up’ while continuing to let other geopolitical actors determine the direction of innovation is really a strategy that serves the public interest.
Third, journalists and activists are uniquely positioned to expand the range of questions that public discourse is concerned with. Whenever public debate on AI is narrowly focused on safety or global competitiveness, civil society actors can push for centering a broader perspective, including questions like: Who funds the research? Who owns the data? Who decides what counts as progress? Technical expertise is not the only expertise needed when it comes to making good collective decisions about AI. We also need the value-based input of societal actors that seek to promote the public interest, and that are able to resist efforts to depoliticize corporate decisions about AI that are framed as purely technical.
Democratizing AI means that the public has real opportunities to shape its direction, not just after tools are built and deployed, but much earlier. Political philosophers interested in the topic of democratic control often emphasize the potentially transformative role of “democratic intermediaries”—actors who do not just represent the public, but help constitute it by creating the conditions for informed, sustained, collective judgment. Activists and journalists can play this role. They need not be politically neutral, and they need not be technical experts. Their role is to ask difficult questions that no one else is asking, and to challenge closed decision-making processes.
Promoting increased democratic control over AI is not hopelessly idealistic; it is possible and necessary. However, achieving this goal will require doing the hard work of empowering democratic intermediaries while simultaneously repairing the broader weaknesses of democratic institutions that have deepened in recent years. If we want the AI age to be aligned with values that serve not just the few but all of us, we—citizens and civil society actors supporting each other—must take control over the story of what real progress and innovation look like.
Photo credit: © Marcus Glahn