Distinguishing between what is real and what is fake is getting more and more difficult. Even more disconcerting is the gray area in between, where information is intentionally altered, taken out of context, or twisted to support a particular narrative. Over time these misconceptions have become embedded in both our political and non-political discourse, woven into our basic assumptions and beliefs. When this happens, it drives poor decision-making and, in the extreme, dangerous rhetoric that dehumanizes and encourages violence against people who do not agree with a particular point of view. Examples range from viral memes suggesting that police officers are violent to media outlets portraying peaceful protests as the work of “violent criminals.” In the most extreme scenarios, disinformation and misinformation fed numerous genocides during the 20th century, whether religiously, ethnically, or politically driven.

Studies and researchers have offered differing definitions of disinformation and misinformation, although most align with the following:

Disinformation – when false information is spread with the intent to cause harm.

Misinformation – when disinformation is spread unintentionally.

Continued exposure to disinformation affects consequential decisions, from split-second battlefield judgments to major policy initiatives fed by hyper-partisan narratives. It also feeds everyday decisions: whom we choose to hire, whether we interpret a person’s actions as a threat based on the way they look or some aspect of their character, and how we respond to that threat. It even shapes our purchasing decisions in the grocery store and the medicines (and “solutions”) we embrace for viruses like COVID-19. When our decision-making is skewed by misinformation or disinformation, we are making decisions based on falsehoods. That, in and of itself, is a dangerous, even life-threatening, proposition.

What is often referred to as “fake news” is simply disinformation, as a UNESCO handbook for journalists points out, and it is typically intended to influence political views. Part of what makes fake news so damaging is that it is disinformation meant to control public opinion, and it does this by shaping the beliefs of its individual consumers. In other words, its intention is not to inform the public but to manipulate it.

The openness and timeliness of social media have greatly facilitated the creation and dissemination of misinformation such as rumors, spam, and fake news. As recent incidents of dis- and misinformation causing real-life harm have shown, the ability to detect false information has become an important problem at every level, from individuals to corporations to governments. It is reported that over two-thirds of adults in the U.S. get news from social media, with 20% doing so more than once per day. Online disinformation has been around since the mid-1990s but was rarely discussed. In 2016, however, several events made it broadly clear that darker forces had emerged: automation, microtargeting, and coordination were fueling information campaigns designed to manipulate public opinion at scale.

During the past four years, the discussion around the causes of our polluted information ecosystem has focused almost entirely on actions taken (or not taken) by technology companies. This analysis is, however, far too simplistic. A complex web of societal shifts is making people more susceptible to misinformation and disinformation. Trust in institutions is falling because of political, health, and economic upheaval, most notably ever-widening income inequality. The effects of climate change are becoming more pronounced. Global migration trends spark concern that communities will change irrevocably. The rise of automation makes people fear for their jobs and their privacy. And as if this weren’t enough, disinformation targets the very framework of our daily lives: what we eat, the bottles we drink from, the beds we sleep on, and everything in between.

Bad actors who want to deepen existing tensions understand these societal trends, designing content that they hope will so anger or excite targeted users that the audience will become the messenger.

The goal of a disinformation threat actor is to have readers use their own social capital to reinforce and lend credibility to the original message. These methods are very effective because similar systems have long targeted our emotions to sell products and influence our purchasing decisions.

It is important to note that disinformation is not always designed to persuade people in a particular direction. It can instead aim to cause confusion, to overwhelm, and to undermine trust in democratic institutions, from the electoral system to the journalists covering it. And although much is being made of preparing the U.S. electorate for the 2020 election, misleading and conspiratorial content did not begin with the 2016 presidential race, and it will not end after the next one. As tools designed to manipulate and amplify content become cheaper and more accessible, whether bot-driven or not, it will become even easier to weaponize users as unwitting agents of disinformation.

Understanding how each person is subject to disinformation campaigns, and might unwittingly participate in them, is a crucial first step to fighting back against those who seek to upend a sense of shared reality. Perhaps most important is accepting how vulnerable our society is to manufactured amplification, and undertaking that analysis sensibly and calmly. Fear-mongering will only fuel more conspiracy theories and continue to drive down trust in quality information sources and the institutions of democracy until there are none left to erode.
