Cyberthreats, AI-enabled disinformation loom over 2024 elections

Experts say U.S. political campaigns could be in for a rude awakening during the next election cycle as technology unlocks a wave of new digital threats.

During the 2016 presidential election, Russian state hackers breached the Democratic National Committee’s server and the private email account of Hillary Clinton’s campaign chairman John Podesta, releasing stolen messages in an unprecedented maneuver that created chaos in the American political system. Before this turning point, the cybersecurity of political campaigns was, for most candidates, an obscure and often overlooked IT problem.

But since then, the U.S. has held three national elections in which threat actors largely failed to shape the electoral landscape by disseminating hacked campaign materials or otherwise exploiting political candidates’ digital infrastructure.

Some experts say the low profile that hacking and cybersecurity have occupied in the campaigns since 2016 is due to efforts by governments, technology companies and party organizations to redress fundamental security failures. Others say that despite the lack of publicly known incidents, campaigns are still as woefully insecure and populated by vulnerable workers as they ever were.

Critics paint a scenario in which the continued insecurity of the U.S. campaign ecosystem, combined with the threat of disinformation and misinformation enhanced by artificial intelligence, makes the upcoming 2024 national elections even more problematic than those of 2016.

Campaign security: Better than before?

Some close observers of the election security scene say that campaign cybersecurity has improved significantly since 2016, with a big assist from tech giants, the U.S. Cybersecurity and Infrastructure Security Agency (CISA), and the national parties. “There are one hundred percent improved offerings and awareness” since 2016, Mick Baccio, CISO for the Pete Buttigieg 2020 presidential campaign and the first CISO to serve any presidential campaign, told README. “I think the culture shifted after 2016.”

Baccio said he thinks a lot of “secure by default” practices are happening due to efforts such as Google’s Advanced Protection Program, which provides free Titan security keys to campaigns to help thwart phishing attacks, and Microsoft’s simplified security offerings for campaigns. CISA has also added to the default security by offering to run campaign vulnerability scans. Moreover, “the way campaigns operate now, they’re in a pretty big cloud environment, relying on AWS, Google and Microsoft,” Baccio said. “If you want to hack a campaign, you’re going to have to hack into one of them, so good luck.”
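The kinds of checks such offerings cover can be quite basic. As a rough illustration only (this is not CISA’s actual scanning tooling), the sketch below uses the dnspython library to test whether a hypothetical campaign domain publishes SPF and DMARC records, two baseline defenses that make phishing and email spoofing harder; the domain name is a placeholder.

```python
# Illustrative sketch only -- not CISA's vulnerability scan. Checks whether
# a (placeholder) campaign domain publishes SPF and DMARC records, two
# baseline defenses against email spoofing and phishing.
# Requires: pip install dnspython
import dns.resolver

DOMAIN = "example-campaign.org"  # hypothetical domain for the demo


def txt_records(name: str) -> list[str]:
    """Return the TXT records for a DNS name, or [] if none exist."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []


spf = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]

print("SPF:", spf if spf else "missing -- mail claiming this domain is easier to spoof")
print("DMARC:", dmarc if dmarc else "missing -- receivers get no policy for failed checks")
```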

But that doesn’t mean Baccio thinks the threat has lessened since 2016. “I think we’re better defended against it,” he told README. “But that is only up to the individual campaigns.”

CISA head of Election Security and Resilience Geoff Hale said in a statement to README that campaigns and partisan organizations remain attractive targets for adversaries. “With this in mind, we offer national party committees, campaign staff and candidates many of the same services and assessments as election officials and the private sector,” he said. “As with election officials, these services are no-cost, voluntary and are offered and provided upon request and on a nonpartisan basis.”

Campaign security is a ‘freaking disaster’

Not everyone agrees with Baccio’s assessment. “I think there is a fair amount of work to do to get that ecosystem more secure,” Michael Kaiser, President and CEO of Defending Digital Campaigns (DDC), a nonprofit, nonpartisan organization that provides campaigns with cybersecurity products, services and information, told README. Kaiser’s organization assists campaigns by distributing and helping them onboard the Google security keys and software products from Microsoft, Cloudflare and other tech benefactors.

“Everybody in the political sector, whether they work on a campaign, or are a third party kind of vendor, a political organization associated with any issues or voter registration, every single one of those people we consider to be high-risk users,” Kaiser said. “And they are under attack from cybercriminals.”

“We’re deeply concerned about making sure that social media accounts have the strongest form of authentication on them so that a Facebook account can’t get hacked and then used for misinformation or disinformation or to embarrass them or to do whatever,” Kaiser said. “So, there’s concern around some of that stuff. I think it’s going to become very challenging going forward.”

[Image: Kelly Sikkema / Unsplash]

The perception that cybersecurity is not as big a problem as it was in 2016 might simply be due to the absence of any publicized attacks on campaigns since then. “People are less concerned now than they were in 2016 because it’s not in the news,” famed cryptographer Bruce Schneier, a Lecturer in Public Policy at the Harvard Kennedy School and a Fellow at the Berkman Klein Center for Internet & Society, told README. “People are concerned about what’s in front of them, and [campaign cybersecurity as a concern] has decayed a bit. The threats are worse, but I think people are less concerned about them.”

In Schneier’s view, campaign staff are “heartbreakingly short-sighted. If you go to someone and say, here’s better security, and your staff will raise 10% fewer funds, they’ll be just more annoyed. They will tell you to get out. Nothing gets in the way of fundraising, organizing and moving. So, yeah, it’s an absolute freaking disaster out there in terms of security. And it’s not that no one cares. It’s that no one has time to care.”

Baccio told README he thinks that in a perfect world, campaigns would unite in a nonpartisan way to solve the cybersecurity problem, although he does not foresee that happening. “I think you will continue not to see cross-party collaboration from a technical level,” he said. “I think that’s a hindrance. I understand why it doesn’t happen, but I think it’s a hindrance.”

ChatGPT could create disinformation at scale

The picture surrounding the other significant threats campaigns face headed into 2024, misinformation and disinformation, is murky. “I think the biggest threat isn’t so much a technical one now,” Baccio said. “Misinformation, disinformation, seems to be a bigger threat than an actual technical attack,” particularly given the meteoric advent of OpenAI’s ChatGPT tool, which can produce compelling, articulate — and frequently inaccurate — text on any topic almost instantly.

“The technology’s getting better at a really fast pace,” Kaiser said. “I know there’s Moore’s law for silicon chips, but I don’t know if there’s an equivalent for AI development. But we can see it’s getting better and faster and more robust all the time. As always, every time there is a new generation of technology, we have to assume the bad actors are going to try and figure out how to use it.” (According to the Physics arXiv Blog at Discover magazine, no measure comparable to Moore’s law exists for AI systems “despite deep learning techniques having led to a step change in computational performance.”)

What ChatGPT might do increasingly well is radically reduce the cost and time it takes to produce misinformation, disinformation and propaganda. Resource-constrained adversaries will no doubt turn to such AI tools to expand the scope of their activities.

[Image: Jonathan Kemper / Unsplash]

“Of course, ChatGPT will become a factor [in 2024 campaigns], but how much of a factor is an interesting question,” Schneier said. “What text generation gives you is speed and scale,” he said, pointing out that Russia’s Internet Research Agency (IRA) hired a hundred people and spent a million dollars per month cranking out messages, posting them and engaging with them during the 2016 campaign. (According to the Senate Intelligence Committee’s investigation into Russian activity during the 2016 campaign, the IRA had 400 employees and spent $1.2 million per month.) “That is the kind of thing that AI will help you scale. So, if that is effective, and we don’t know if it is, then yes, this would be a way to scale it.”

Jack Brewster, Enterprise Editor for NewsGuard, a company that evaluates websites, TV shows and podcasts for editorial integrity, led an experiment to show how ChatGPT could create misinformation. The NewsGuard team found that when it fed the chatbot a series of leading prompts based on a sampling of 100 false narratives, the tool could spew convincing new misinformation all on its own.

Brewster echoed Schneier’s belief that ChatGPT could make creating misinformation virtually costless. “This technology has the capacity to ‘democratize’ the troll farm and lower the barriers to entry for prominent misinformation,” he told README. “Before this technology was available, if I wanted to launch this information campaign, I had to hire a team of writers that could write effective copy. That is a time-consuming process that also costs money. And now I have the power of thousands of writers at my disposal if I can effectively use this technology to whatever ends.”

Foreign adversaries could particularly benefit from creating propaganda or misinformation using ChatGPT. “I think China, Russia and other countries that have used or launched disinformation campaigns in the past could easily weaponize this technology,” Brewster said. “One of the barriers to entry for them is writing effective copy in English. One of the ways that people can spot misinformation is if it’s written poorly. And suddenly, if foreign bad actors are able to weaponize this technology, they can very easily launch campaigns where they don’t need to worry about those things.”

We might not have a functioning democracy anymore

Misinformation and disinformation expert Kate Starbird, an Associate Professor in the Department of Human-Centered Design & Engineering and Director of the Emerging Capacities of Mass Participation Laboratory at the University of Washington, told README she worries about the toxic combination of election system cybersecurity vulnerabilities and ChatGPT exploitation.

“One of the threats would be a hybrid attack where someone tries to compromise election systems and in concert with that effort to actually compromise the systems runs an informational attack to exploit that,” Starbird said. In a U.S. population that already distrusts the media amid rampant misinformation, “that would be something that could amplify the lack of trust. This is the place where I think we are vulnerable, especially to foreign campaigns.”

[Image: Element5 Digital / Unsplash]

Those concerns follow the findings of a Gallup poll published in October 2022, which found that only 7% of Americans have “a great deal” of trust and confidence in the media and 27% have “a fair amount,” for a combined trust level of 34%, just two percentage points above the lowest level Gallup has recorded, during the 2016 presidential election.

“You’ve got a very potentially effective way of manipulating public opinion about the security of elections,” Starbird added. “If people lose trust in the process, they lose trust in the results. Then we don’t really have a functioning democracy anymore.”

The “first AI war”?

It is still early days for AI text-based misinformation, and no clear solutions have emerged. “You can’t have campaigns reaching out on their own trying to fight this information with a staff of three people who are also trying to win an election,” Baccio said.

“That’s a hard question. Nobody knows,” Schneier said about fighting any potential AI-generated disinformation. “There’s already a lot of effort trying to figure out who’s behind anything. Facebook already spends considerable effort trying to take down what they call coordinated inauthentic behavior, and this would be an example of that. They have all sorts of detection mechanisms. Some of them are AI-based. So, it will be AI producing the stuff versus AI trying to discover it and take it down. This is your first AI war right here.”
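To make that concrete, here is a minimal, purely illustrative sketch of one weak signal such detection efforts can draw on: near-identical text posted by different accounts. It uses scikit-learn’s TF-IDF vectors and cosine similarity; the accounts, posts and threshold are invented for the example, and no real platform’s system is anywhere near this simple.

```python
# Toy illustration of one weak "coordinated inauthentic behavior" signal:
# near-identical text posted by different accounts. Accounts, posts and
# threshold are invented for the demo; real systems combine many more
# signals (timing, follower graphs, account age) before taking action.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    ("acct_a", "The election results cannot be trusted, share this now!"),
    ("acct_b", "The election results can NOT be trusted - share this now"),
    ("acct_c", "Lovely weather for the farmers market this weekend."),
    ("acct_d", "Election results cannot be trusted!! Share now."),
]

texts = [text for _, text in posts]
similarity = cosine_similarity(TfidfVectorizer().fit_transform(texts))

THRESHOLD = 0.6  # arbitrary cutoff chosen for the example
for i, j in combinations(range(len(posts)), 2):
    if similarity[i, j] >= THRESHOLD:
        print(f"possible coordination: {posts[i][0]} <-> {posts[j][0]} "
              f"(cosine similarity {similarity[i, j]:.2f})")
```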

Starbird sees the ChatGPT misinformation problem as an opportunity to create solutions. “I think there’s an opportunity for someone to build in this space,” she said. “Because I think eventually people will want to choose the platforms that help them get the information they need in the format they need to make good decisions.”