AI Will Accelerate Direct Democracy—and Destroy It
By David Altman
Journal of Democracy, 2026
David Altman, the world’s leading political scientist on direct democracy, turns his attention to AI in this new paper—and finds a paradox.
“Generative AI will act as a powerful accelerator for direct democracy, but in doing so, it will systematically undermine the civic foundations that make these mechanisms legitimate and stable,” he writes.
Altman, a Uruguayan who is on the faculty at the Catholic University in Santiago, Chile, explains that AI will make it cheaper and easier for people to write high-quality citizen-initiated measures of direct democracy—like initiatives and referenda. But by producing so many more direct democracy measures, AI will “flood the public square,” undermining the deliberation and public decision-making that are the chief virtues of direct democracy.
In the end, he predicts, AI could well “create a system that is more plebiscitary than deliberative, more efficient than legitimate, and ultimately, more destabilizing than stabilizing.”
Altman, whose 2011 book Direct Democracy Worldwide remains a foundational text, is a creative sort. And here he leans on fiction techniques to describe two visions of what direct democracy might look like with AI.
The nightmare, which he dubs Automated Plebiscites, looks like this:
In the spring of 2029, a major European nation finds its public sphere saturated by the hashtag #TakeBackOurCountry. The posts are compelling, personalized, and eerily authentic. They do not look like political ads; they look like heartfelt messages from neighbors, shared by friends. Behind them lies a sophisticated AI model, financed by opaque networks, that has identified and exploited a deep vein of public anxiety over a slowing economy and rising immigration—an illustration of how AI systems can target and amplify social grievances, as research on AI-driven persuasion and coordinated swarms has suggested (Kreps and Kriner 2023; Ferrara 2024; Schroeder et al. 2025).
Within weeks, the AI has not only shaped public opinion but has channeled it into direct action. It drafts the legally airtight text of the “National Sovereignty Restoration Act,” a citizen’s initiative proposing the mass deportation of undocumented migrants and a near-total moratorium on new asylum claims. It then orchestrates a flawless signature-gathering campaign, micro-targeting sympathetic voters and guiding volunteers with real-time optimization. What once took activists years is now accomplished in days.
The initiative qualifies for the ballot. A democratic tool, designed to give people a voice, has been weaponized. The ensuing campaign is a nightmare of clarity. Proponents deploy an army of AI-generated personas—concerned mothers, retired soldiers, out-of-work factory workers—that debate in flawless local dialects across thousands of parallel online forums, creating an overwhelming illusion of popular consensus. Deepfake audio of a political leader supposedly endorsing the proposal circulates widely before it can be debunked, echoing earlier warnings about the destabilizing potential of synthetic media and “deepfakes” to erode trust and overwhelm fact-checking capacities (Chesney and Citron 2018; Vaccari and Chadwick 2020; Meaker 2023). The opposition—human rights groups, mainstream parties, and a weakened civil society—is drowned out. They are fighting a hydra of synthetic persuasion.
On election day, the measure passes with 58% of the vote. The system worked exactly as its designers intended: citizens initiated a law, and citizens voted on it. But was this a triumph of direct democracy, or its catastrophic failure? The result plunges the country into a constitutional crisis, triggers international condemnation, and sparks violent street protests.
Altman does offer an alternative, “a preferable path of Augmented Deliberation,” and a roadmap that includes digital guardrails “to harness AI’s power to strengthen rather than hollow out democratic practice.” These guardrails include a cooling-off period to slow down AI direct democracy, a watermark to disclose AI’s use in political documents, and “public interest AI armories.”
Democracy Local will leave you to read about those possibilities in the full piece here.