Recent Election Incident Highlights Need for Tools that Counter Deepfakes

In the run-up to the recent New Hampshire presidential primary, voters in the state were bombarded with what law enforcement officials suspect was an artificial intelligence-generated robocall impersonating President Biden, aimed at suppressing the vote.

“What a bunch of malarkey,” the recorded message from “Biden” begins, according to NBC News, telling recipients to stay home and “save your vote for the November election.”

The robocall offers a glimpse of a dawning era in which bad actors wield AI as a weapon against essential institutions and functions of democracy, including elections.

As the incident underscores, recent advances in artificial intelligence are prompting a growing number of experts to raise the alarm, warning that so-called generative AI has the capability to lock us inside blinding illusions that erode democracy.

Generative AI systems such as ChatGPT, Bard, PaLM, ImageGPT, DALL-E and PrimeVoiceAI can produce text, voices and images (and, in the near future, videos) so realistic that even experts have difficulty determining their origin and authenticity.

In the current media ecosystem, the increased risk of exposure to highly realistic deceptions known as deepfakes is especially worrisome. “Deepfake” is a portmanteau of “deep learning” and “fake media”; the term refers to media (text, audio, images and video) created with generative AI that relies on deep neural network models.

The fabrication and manipulation of digital media content is not a new phenomenon, but the soaring sophistication of generative AI has made it easier and cheaper to create high-quality deepfakes in large quantities.

Amplified by social media’s speed and scale, the potential impact of these fakes is far-reaching. A joint multi-agency and industry report led by the Department of Homeland Security warned that deepfakes “pose a threat for individuals and industries, including potential largescale impacts to nations, governments, businesses and society, such as social media disinformation campaigns operated at scale by well-funded nation state actors.”

“We expect an emerging threat landscape wherein the attacks will become easier and more successful, and the efforts to counter and mitigate these threats will need orchestration and collaboration by governments, industry and society,” the report concluded back in 2021.

That emerging threat landscape has arrived.

In the wrong hands, deepfakes pose significant threats to personal security and to the information integrity of entire communities. Examples range from the viral image of Pope Francis in a puffer jacket, to a recent fabricated video of Taylor Swift selling beauty products and fake explicit images of the singer, to more serious incidents, such as an AI-generated image depicting a Pentagon explosion that caused fluctuations in the S&P 500 index.

Convincing and realistic deepfakes pose a threat to democracies around the world. With elections looming for half of the Earth’s population, including the United States, we must recognize the threat and do everything we can to raise awareness and mitigate risks.

An approach to combating deepfakery

Users and consumers are the first line of defense against deepfakes. They need to be educated about how to identify potential fakes. More important, they should learn which sources to trust, asking themselves why a website might be publishing a particular image, video or audio clip. Users and consumers should also be supported by the widespread deployment of detection tools and mitigation methods.

For instance, at the University at Buffalo we have developed learning tools in the form of mobile games to help Americans 65 and older understand the myriad new forms of scams and disinformation fueled by generative AI.

We also provide an open platform that bundles state-of-the-art methods for deepfake detection, attribution and provenance analysis, serving journalists, law enforcement investigators and the general public.
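To give a sense of how such detection tools typically work under the hood, here is a minimal sketch in Python. The model file, input size and output convention are hypothetical stand-ins, not the interface of any particular platform: it assumes a pretrained binary classifier saved as detector.pt that emits a single logit, where higher means more likely AI-generated.

```python
# Minimal sketch of deepfake image detection with a pretrained binary
# classifier. "detector.pt" is a hypothetical TorchScript model that
# outputs one logit (higher = more likely AI-generated); it is a
# placeholder, not any specific tool's API.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # match the detector's input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(image_path: str, model: torch.nn.Module) -> float:
    """Return the detector's estimated probability that the image is fake."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)            # add a batch dimension
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()

model = torch.jit.load("detector.pt").eval()          # hypothetical model file
p = fake_probability("suspect_image.jpg", model)
print(f"Estimated probability of being AI-generated: {p:.2%}")
```

In practice a single score is rarely conclusive, which is why platforms like ours combine multiple detectors and report attribution and provenance evidence alongside the verdict.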

Social media companies need to take a more active role. They should give users the ability to flag suspicious content and alert their social media circles. Legislation and regulation can also help. As one example, the White House issued an executive order last October to protect Americans from AI-enabled fraud and deception “by establishing standards and best practices for detecting AI-generated content and authenticating official content.” As part of that broad effort, the Department of Commerce is developing guidance for content authentication and watermarking to clearly label AI-generated content.
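As a toy illustration of the idea behind watermarking, the Python sketch below hides a short ASCII label in the least-significant bits of an image’s blue channel and reads it back. The file names and label are hypothetical, and real content-credential schemes are far more robust, relying on signed metadata or watermarks designed to survive compression and editing.

```python
# Toy least-significant-bit (LSB) watermark: embeds a short ASCII label
# into an image's blue channel and recovers it. Illustrative only;
# production provenance schemes use cryptographically verifiable,
# tamper-resistant techniques.
import numpy as np
from PIL import Image

def embed_label(in_path: str, out_path: str, label: str) -> None:
    pixels = np.array(Image.open(in_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(label.encode("ascii"), dtype=np.uint8))
    flat = pixels[..., 2].flatten()                        # blue channel
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite lowest bit
    pixels[..., 2] = flat.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(out_path)                 # must be lossless (e.g., PNG)

def read_label(path: str, length: int) -> str:
    pixels = np.array(Image.open(path).convert("RGB"))
    bits = pixels[..., 2].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

embed_label("generated.png", "labeled.png", "AI-GENERATED")
print(read_label("labeled.png", len("AI-GENERATED")))      # -> "AI-GENERATED"
```

A watermark like this is invisible to the eye but trivially destroyed by re-encoding, which is exactly why the standards work described above matters: labels are only useful if they persist as content moves across platforms.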

Additional investments are needed for detection tools and educational initiatives to counter the misuse of AI, including deepfakes. The current level of investment in these areas is insufficient to keep up with the pace of development and proliferation of generative AI.

Most importantly, we need market reforms that change the incentive structure of our media economy to discourage misuse of these powerful new technologies and prioritize the public good. Social media companies continue to profit from sensationalist, emotionally charged, divisive and polarizing content, which often includes misleading and inauthentic images and audio. While they are not in the business of creating content, their algorithms are trained to prioritize the kind of eye-popping content that attracts more views, regardless of its reliability or authenticity.

These companies are unlikely to make meaningful adjustments to their business practices without the right combination of incentives and disincentives. This need is even more urgent now, as the arrival of generative AI has made it easier for anyone to create realistic fakes, requiring little or no training.

Democracies everywhere are at risk unless we figure out how to promote and reward proactive action in the vital area of information integrity and discourage not just individual misbehavior but self-interested corporate inaction.

 
