There are dozens of elections taking place in 2024, and little sense of how generative AI will affect them from one context to the next. This session explored the questions funders should be considering: In what ways will democratic governance be impacted by AI? What have we learned about the role of AI in the electoral process globally in the first half of this historic election year? What role will AI play in campaigning, election administration, and the information environments during and after elections? How will the impact of AI on elections change the material conditions for communities around the world?
Speakers included:
Prathm Juneja: PhD candidate in social data science at the Oxford Internet Institute (OII)
Itika Sharma: Rest of World
Raquel Vazquez Llorente: WITNESS
Julia Rhodes Davis (facilitator): Computer Says Maybe
Watch the recording
Password: AI
What kinds of actions can we take when AI’s impact on elections & democracy is still uncertain?
One pertinent observation from this panel was that 2024 is a unique year in the sociotechnical space. There are over 60 elections happening worldwide in the same year that generative AI has reached a new level of maturity: tools and models are more accessible than they've ever been, and are routinely repackaged and distributed for wide consumer use, while governments and policymakers remain largely unprepared for the consequences.
Being ‘prepared’, whatever that may look like, is also a fiendish challenge, given that at this early stage it’s hard to know what impact ubiquitous generative AI tools will have on elections. All we can really know is that there will be an impact, and that it may take years for evidence of it to surface.
These challenges around assessing impact make it hard to take a systemic view, so it’s important to focus on immediate interventions for the most urgent problems, such as deepfakes and misinformation. These problems are not new, but they are now arriving at far greater volume, often packaged as lighthearted viral entertainment: AI-generated videos of deceased world leaders endorsing present-day candidates, recreated Bollywood songs and movie clips, and endless memes. It can be impossible to separate what’s real from what isn’t.
Local journalists covering elections, who are already overburdened with handling mis/disinformation, now have to prove that their own content is real, all while navigating what is known as the liar’s dividend: a dynamic in which those spreading misinformation, or those looking to dodge reputational damage, benefit from an information ecosystem saturated with false information and synthesised media. Real videos of politicians embarrassing themselves can easily be dismissed as fake by those same politicians, because the mere presence of AI, even when it hasn’t been used to synthesise or alter content, cultivates a level of skepticism in which nearly everything can be seen as ‘fake’.
There are many tools that can help detect AI-generated content, but not all of them are accessible to the communities that need them most. Elections happening outside a Western context won’t be covered in depth by mainstream media outlets (such as the New York Times or similar), so information will likely flow organically and through private channels, such as WhatsApp groups, making it hard to track and hard to authenticate.
Some of the work Raquel is doing with WITNESS is to bridge the gap between the computational detection tools and methods that exist, and the people on the ground who need them to do their work. This still creates a fair amount of friction at a time when reporting needs to move fast; Itika pointed out that if any of her staff want to report on a piece of media, they first have to go through three or four researchers to verify whether or not the content is real. The popularity of AI tools has not only eroded trust, but also added a new layer of technical debt for journalists to work through.
The panel discussed a range of interventions, many aimed at tackling mis/disinformation. Here are three that would benefit from consideration in the immediate term:
Not only identifying what is fake, but protecting what is true: the bottomless pit of content pouring out of election discourse is making fact-checkers question their own judgement, in large part because no one is sure what AI is even capable of. It’s important to remember that just because something is AI-generated does not make it more harmful, more malicious, or ‘worse’. Supporting the provision of tools and education for journalists and fact-checkers is key.
Connecting digital infrastructure providers with electoral commissions: generative AI companies such as OpenAI have made statements about how their tools will respond to queries about the US election, but they have made no equivalent commitments in other countries. Space needs to be made for the companies building and providing models, and the electoral commissions of other countries, to get in a room together to discuss how these tools will be received in their respective nations.
Addressing the human supply chain that sits behind these AI systems: it’s hard for reporters to understand what’s going on inside generative AI companies because the industry is so opaque; there is no clear mechanism for finding a gig worker and asking them how things work behind the scenes. Here too, mentoring for independent journalists is needed. You can also read more about the AI supply chain in the May recap.