
The Emergence of Disinformation as a Major Threat

First of a three-part series.

The warped reality of disinformation spreading online is a national security risk requiring an all-hands-on-deck approach to thwart the growing dangers. In this three-part series, experts at the University at Buffalo’s Center for Information Integrity assess the threats, the dangers they pose, and a range of possible solutions.

What is disinformation and how do we distinguish it from its more capacious cousin, misinformation? Intent is key.

“Disinformation” is false or misleading information presented with the intent to deceive or manipulate an audience. Misinformation, while false or inaccurate, can be the result of human error and not an active or deliberate will to mislead.

These distinctions can help expose, defang and stop nefarious actors who are intent on defrauding individuals or groups, destabilizing our political system and seeding chaos in our civil society. But they also recognize that the spread of false and inaccurate information itself can be harmful, whether it originates from malicious sources or is repeated inadvertently by spreaders who have come across it innocently.

To put it bluntly, whether the source of information in question is a cranky uncle or a Russian bot masquerading as a political ally, when people act upon or spread false or misleading information, they become a de facto security risk.

This is not a new problem. Disinformation and misinformation have been unfortunate byproducts of the rise of new media throughout history. The emergence of print culture in Europe in the 1500s, the “yellow journalism” of the 19th century and the arrival of radio and television in the 20th century are notable precedents. What has changed is the arrival of the internet, which allows a volume and speed of transmission without parallel in human history. The result is an exponential increase in false and misleading information flooding our media-saturated world. Unfortunately, the effect is pervasive enough to erode trust in our democratic institutions and weaken our ability to respond to emerging crises.

Algorithms designed to optimize user engagement and maximize profits are ruthlessly efficient. They work behind the scenes to amplify sensationalist and polarizing content that attracts more views. Whatever gets a response is rapidly spread — and bad actors exploit this vulnerability.

It is hardly an exaggeration to say that this is now a matter of national security in the United States and around the world. Simply fact checking and identifying social media accounts run by bots is not enough. This is a multi-faceted problem that requires several approaches to remedy.

We need to understand and be able to explain what drives the viral spread of dis/misinformation. More importantly, we need strategies to contain the damage and defuse a threat that is affecting nearly every area of public life.

This is the critical mission of the newly founded Center for Information Integrity at the University at Buffalo. Our 30-plus members recognize that tackling what may well be the defining challenge of our time requires the collective expertise of a wide range of specialists in vastly different fields, from computer science and engineering to law, medicine, the social sciences, and the arts and humanities — all working together with policymakers.

Our intention is to improve on existing red-flagging and debunking practices, which identify and correct false information already spreading online, with innovative “pre-bunking” approaches. These practices aim to prevent inadvertent spread by raising awareness and building resilience among our most vulnerable populations.

For consumers of complex digital products that compete for their attention, this means learning to recognize the basic patterns and tell-tale signs of manipulative media and dis/misinformation: sensationalist imagery, emotionally charged language, confirmation-bias hooks, gaslighting strategies and other forms of personal attack, and questionable equivalences.

Garry Kasparov, the former world chess champion and chair of a human rights group, has captured what’s at the heart of this threat. He points out that disinformation is particularly insidious and dangerous because it damages a nation at its most fundamental level. “The point of modern propaganda isn’t only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth,” he says.

In subsequent commentaries for Defense Opinion, some of our leading members will discuss imminent security risks and emerging threats in critical areas of public life and share their ideas of what we can do to defuse them.

