Analysis: In the age of AI, keep calm and vote on

This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.
When I started this series on artificial intelligence, disinformation and global elections, I had a pretty clear picture in mind.
It came down to this: While AI had captured people’s imagination — and the likes of deepfakes and other AI-generated falsehoods were starting to bubble to the surface — the technology did not yet represent a step change in how politically motivated lies, often spread via social media, would alter the mega-election cycle engulfing the world in 2024.
Now, after nine stories and reporting trips from Chișinău to Seattle, I haven’t seen anything that would alter that initial view. But things, as always, are more complicated — and more volatile — than I first believed.
What’s clear, based on more than 100 interviews with policymakers, government officials, tech executives and civil society groups, is that the technology — specifically, generative AI — is getting more advanced by the day.
During the course of my reporting, I was shown deepfake videos, purportedly portraying global leaders like U.S. President Joe Biden and his French counterpart Emmanuel Macron, that were indistinguishable from the real thing. They included politicians allegedly speaking in multiple languages and saying things that, if true, would have ended their careers.
They were so lifelike that it would take a lot to convince anyone without deep technical expertise that an algorithm had created them.
Despite being a tech reporter, I’m not a fanboy of technology. But the speed of AI advancements, and the ease with which people with little, if any, computer science background can use these tools, should give us all pause.
The second theme that surprised me in this series was how much oversight had been outsourced to companies — many of which were the same firms that created the AI systems that could be used for harm.
More than 25 tech giants have now signed up to the so-called AI Election Accords, voluntary commitments from companies including Microsoft, ByteDance and Alphabet to do what they can to protect global elections from the threat posed by AI.
Given the track record of many of these firms in protecting users from existing harms, including harassment and bullying on social media, it’s a massive leap of faith to rely on them to safeguard election integrity.
That’s despite the genuine goodwill toward reducing politically motivated harm as much as possible that I perceived in multiple interviews with executives at these firms.
The problem, as of mid-2024, is that governments, regulators and other branches of the state are just not prepared for the potential threat — and it does remain potential — tied to AI.
Much of the technical expertise resides deep within companies. Legislative efforts, including the European Union’s recently passed Artificial Intelligence Act, are, at best, works in progress. And the near-total lack of oversight of how social media platforms’ AI-powered algorithms operate makes it impossible to rely on anyone other than the tech giants themselves to police how these systems determine what people see online.
With AI advancing faster than you can say “large language model” and governments struggling to keep up, why am I still cautious about heralding this as the year of AI-fueled disinformation, just as billions of people head to the polls in 2024?
For now, I have a potentially naive belief that people are smarter than many of us think they are.
As easy as it is to think that one well-placed AI deepfake on social media may change the minds of unsuspecting voters, that’s not how people make their political choices. Entrenched views on specific lawmakers or parties make it difficult to shift people’s opinions. The fact that AI-fueled forgeries must be viewed in a wider context — alongside other social media posts, discussions with family members and interactions with legacy media — also hamstrings the ability of such lies to break through.
Where I believe we’re heading, though, is a “post-post-truth” era, where people will think everything, and I mean everything, is made up, especially online. Think “fake news,” but turned up to 11, where not even the most seemingly authentic content can be presumed to be 100 percent true.
We’re already seeing examples of politicians claiming that damaging social media posts are deepfakes when, in fact, they are legitimate. With the hysteria around AI often outpacing what the technology can currently do — despite daily advances — there’s now a widespread willingness to believe all content can be created via AI, even when it can’t. 
In such a world, it’s only rational to not have faith in anything.
The positive is that we’re not there yet. If the nine articles in this “Bots and Ballots” series show anything, it’s that, yes, AI-fueled disinformation is upon us. But no, it’s not an existential threat, and it must be viewed as part of a wider world of “old-school” campaigning and, in some cases, foreign interference and cyberattacks. AI is an agnostic tool, to be wielded for good or ill.
Will that change in the years to come? Potentially. But for this year’s election cycle, your best bet is to remain vigilant without getting caught up in the hype train that artificial intelligence has become.
Mark Scott is POLITICO’s chief technology correspondent. He writes a weekly newsletter, Digital Bridge, about the global intersection of technology and politics. 
This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate. The article is produced with full editorial independence by POLITICO reporters and editors. Learn more about editorial content presented by outside advertisers.