University at Buffalo Center Is Developing Tools to Thwart ‘Deep Fakes,’ Other Disinformation


Third in a three-part series.

The warped reality of disinformation spreading online is a national security risk that demands an all-hands-on-deck response. In this three-part series, experts at the University at Buffalo’s Center for Information Integrity assess the threats, the dangers they pose and a range of possible solutions.

The world is experiencing a digital metamorphosis — nearly everything we express, buy, rent, believe or decide on starts with digital content and remains forever on the Internet. But humans are not the only inhabitants of digital spaces.

We increasingly share this new frontier with ever more capable artificial intelligence (AI) entities, conversational agents and synthetic media. The advance of powerful AI technologies has enabled feats unimaginable only a decade ago.

Yet it also opens a Pandora’s box of misunderstanding and abuse. Synthetic media (text, images, audio and video) generated or manipulated automatically by AI algorithms, commonly known as “deep fakes,” have begun to challenge our long-held belief that “seeing is believing.”

Combating the erosion of authenticity

The proliferation of synthetic media also erodes the value of authenticity. It raises doubts about our notion of truth and reality while facilitating the spread of online disinformation and misinformation, and undermining trust in our nation’s democratic institutions.

As consumers of information, we all must learn how to operate in a world teeming with synthetic media. We must also reckon with the fact that anyone, without special skills or training, can now create convincing fake digital content in seconds.

The first line of defense is technology that can expose synthetic media.

For instance, technology companies and researchers are developing practical algorithms and systems that can expose fakes, raising the bar on the quality, time and skill needed for their production. Detection, however, will never be enough because the algorithms used to create deep fakes are always improving.
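To make the idea concrete, the sketch below shows how a frame-level detector might score a single video frame. Everything here is a hypothetical placeholder rather than any group’s actual tooling: the ResNet-18 backbone is untrained as written, and a real detector would be fine-tuned on large corpora of authentic and synthetic media before its scores meant anything.

```python
# Minimal sketch of a frame-level deepfake scorer (illustrative only).
# The architecture and the two-class real/fake setup are assumptions,
# not a description of any specific detection system.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Binary classifier head: class 0 = authentic, class 1 = synthetic.
model = models.resnet18(weights=None)  # untrained placeholder weights
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def synthetic_score(image_path: str) -> float:
    """Return the model's probability that a frame is synthetic."""
    frame = Image.open(image_path).convert("RGB")
    batch = preprocess(frame).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item()  # probability of the "synthetic" class
```

A production pipeline would run such a scorer over many frames, combine the per-frame scores and weigh other signals (audio, metadata, compression artifacts) before flagging a video.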

Researchers are also working on ways to prevent the creation of unauthorized synthetic media. New digital tools would allow legitimate uses, such as videos of celebrities speaking foreign languages with their permission, while preventing bad actors from using someone’s image or voice to create fraudulent, embarrassing or potentially incriminating synthetic media.
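The article does not specify how such tools work, but one related family of techniques attaches a provenance signal to authentic media so that unauthorized derivatives can be spotted later. The toy sketch below embeds a short bit string in an image’s least-significant bits; real provenance schemes (cryptographically signed metadata, robust watermarks) are far more tamper-resistant than this.

```python
# Toy provenance watermark (illustrative only): hide a short bit string
# in the least-significant bits of an 8-bit image. Real schemes are far
# more robust; this only demonstrates the general embed/verify idea.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Write each bit into the lowest bit of successive pixel values."""
    out = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear, then set, the low bit
    return out.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, n_bits: int) -> list:
    """Recover the first n_bits low-order bits from the pixel values."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

# Round-trip check on random image data.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
signature = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, signature)
assert read_watermark(stamped, len(signature)) == signature
```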

Combating disinformation also entails an elevated level of awareness and resilience on the part of consumers. Everyone, including businesses, institutions and casual users, needs to be better prepared for a social media environment where certain kinds of manipulative media cannot be differentiated from authentic and legitimate content.

It can be tempting to take skepticism to extremes and distrust all content. That reflex feeds another by-product of synthetic media, known as the “liar’s dividend”: bad actors can undermine real media by claiming it is a deep fake. We must recognize the complex nature of disinformation and the limited reach of purely technical fixes.

Using ‘chatbots’ to thwart the spread of disinformation

A practical solution must bring together experts from computer science with researchers in social and behavioral sciences, the humanities and user communities to develop strategies that “inoculate” users and empower them to protect themselves.

One such cross-disciplinary approach uses conversational artificial intelligence, known popularly as social bots or chatbots. Chatbots are AI tools that can understand human conversation and respond appropriately to the context and application. Companies already deploy chatbots extensively in e-commerce to assist customers.

The recent Alexa Prize competition, sponsored by Amazon, spurred the development of social bots that can engage in prolonged conversations on any topic, paving the way for their use in combating disinformation.

An initial application involves using chatbots to test the resilience of users to online deception schemes. Additionally, as their conversational abilities continue to mature, chatbots can be deployed in live social media environments. They can act as trustworthy assistants to users and flag potential disinformation, as well as suggest responses when confronted with hate speech. They can eventually be trained to participate in online conversations in an attempt to persuade users to consider additional facts before propagating disinformation.
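As a sketch of the “trustworthy assistant” role, the snippet below nudges a user whose draft post resembles a previously debunked claim. The claim list, the substring matching and the fact-check links are all hypothetical stand-ins; a deployed assistant would use a trained claim-matching model over a curated fact-check database rather than keyword lookups.

```python
# Illustrative sketch of a chatbot assistant that flags possible
# disinformation before a user shares it. The claims and URLs below
# are placeholders, not real fact-check entries.
from typing import Optional

DEBUNKED_CLAIMS = {
    # claim fragment -> fact-check link (both hypothetical)
    "miracle cure": "https://example.org/fact-check/miracle-cure",
    "rigged by satellites": "https://example.org/fact-check/satellites",
}

def review_post(text: str) -> Optional[str]:
    """Return a gentle nudge if the draft post matches a debunked claim."""
    lowered = text.lower()
    for fragment, link in DEBUNKED_CLAIMS.items():
        if fragment in lowered:
            return (
                "This post resembles a claim that fact-checkers have "
                f"disputed. You may want to read {link} before sharing."
            )
    return None  # no match: the assistant stays silent

if __name__ == "__main__":
    draft = "Scientists found a miracle cure they don't want you to see!"
    nudge = review_post(draft)
    if nudge:
        print(nudge)
```

The design choice worth noting is that the assistant suggests rather than blocks: it surfaces context and leaves the sharing decision with the user, which matches the article’s emphasis on inoculating and empowering users.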

In short, once deployed by social media platforms, these specialized AI chatbots could help mitigate disinformation and other forms of online deception.

The nation’s institutions of democracy are only as good as the trust people place in them. Researchers at the University at Buffalo Center for Information Integrity are at the forefront of efforts, including specialized AI chatbots, to thwart the spread of disinformation that undermines that trust.
