News & Insights | Posted July 10, 2023
Artificial Truths? AI and the Future of Politics
Could AI be the next thing to further toxify our politics or is it a great opportunity to improve the level of debate? Liam Smith takes a look underneath the AI bonnet and considers what it may all mean.
Joe Biden, Donald Trump and Barack Obama recently engaged in a heated debate.
Their point of contention? Which choice of drink was superior.
Obama, defending water, accused soda-loving Trump of being a sore loser and a calorie addict, while Biden happily declared his love for coffee.
Yes, you read that right. You can listen to it here, even though it never actually happened. It was voiced convincingly by AI in a parody video.
Does using AI in politics spell democracy’s doomsday?
Prior to AI, misinformation or “fake news” was already a threat, with lies about President Obama’s birth certificate and a manipulated video of former US House Speaker Nancy Pelosi appearing drunk on stage going viral in 2019.
For as long as there’s been politics, misinformation has been weaponised – something AI could catalyse.
In fact, it’s already happening.
A ‘deepfake’ video of President Biden ordering a military draft was shared on Twitter earlier this year. In June, Presidential candidate Ron DeSantis used deepfaked and real images together in an anti-Trump attack ad, while anti-Kremlin forces in Belgorod broadcast a deepfaked national address of President Putin ordering an evacuation.
Deepfakes are becoming more common and sophisticated by the day. AI has already distorted truth to the point that a video of President Biden last year was accused of being a deepfake. It was a real address.
If a convincing deepfake surfaces on the eve of an election, could it sway enough voters to affect the election’s outcome before it can be effectively debunked? And in any case, these “debunks” could themselves be dismissed as “fake news”. The worry must be that hostile actors could easily interfere in elections with no more than an AI tool and the click of a button.
Could AI be the ultimate campaign tool?
So, is it time to hunker down and prepare for our democracy’s Armageddon?
Perhaps AI could, instead, revitalise democracy.
Communication is key in any political campaign. If a campaign’s policies cannot be clearly conveyed to voters, it is bound to fail. Ask OpenAI’s ChatGPT to explain quantum physics in layman’s terms and it can do so with ease.
This technology could transform party platforms struggling to translate complex policy to the electorate. AI could make politics more accessible to voters and campaigning more purposeful for politicians. AI could, in effect, democratise our democracy.
AI could help formulate campaign strategy, too, analysing social media and polling trends to identify key voter issues and campaign weaknesses and prompting strategic recalculations. Such analysis, especially where polls alone prove unreliable, could be vital to a successful campaign.
This year the US Democratic National Committee tested the engagement and fundraising performance of AI-produced content and found that the AI-generated work equalled or outperformed human-made equivalents.
With no current rush to implement it, it is unlikely AI will play any major role in the campaign strategies of 2024, in either the UK or US. However, its strategic potential is unlikely to be overlooked for long.
How to spot a deepfake
Spotting deepfakes is still a major problem, but it might be possible to fight fire with fire.
Adobe recently announced that Photoshop will be able to detect ‘Photoshopped’ images. What if AI could do the same?
Working with social media firms, AI could detect and label deepfakes the moment they are posted, cutting off misinformation at the source and blunting its electoral impact, all without the need for censorship. Who better to recognise AI deepfakes in the half-second upload time than AI itself?
Rather than be the boogeyman of democracy, could AI prove to be its champion?
Regulating a revolution
How do we minimise the risks of AI while bolstering its positives? There is only one answer: Regulation.
Key to regulation will be balance. In its AI policy report updated last August, the Scottish government focused solely on AI’s positives without mentioning the risks.
Regulators are struggling to keep up, but slowing down progress is not an option.
Election-interfering states keen on spreading misinformation would benefit from a head-start in AI if we in the West decided to pause our advancements.
We require regulation, yes, but continuing AI development is key, ironically, to countering the threats of further AI development. We cannot put the AI genie back in the bottle. No matter how much some may wish, you cannot un-invent the invented. Just ask Oppenheimer.
A balance between strong regulation and continued development is essential – a balance the UK government and the wider global community seem eager to strike. The G7’s Hiroshima statement called for regulation, specifically addressing generative AI and the danger of misinformation. Prime Minister Rishi Sunak and President Biden discussed the issue a few weeks ago, leading to the Atlantic Declaration. The UK is set to host the world’s first AI summit later this year, and Sunak has stated he wants the UK to be a world leader in developing “responsible AI”.
These efforts may not be enough, however. This week the UK government passed anti-deepfake legislation, but it concerns only deepfake pornography, not misinformation, and many policy papers fail to address misinformation at all.
For regulation to be effective, it must come sooner rather than later, and, if AI is to benefit and not destabilise our politics, it must do far more to address these key issues.
We cannot afford to wait.
Liam Smith is a public affairs intern at 56 Degrees North.