In the U.S. and across much of Europe, disinformation is viewed within a predefined context. This normative paradigm of information operations typically rests on two assumptions: that disinformation is pushed by a foreign actor, and that it is a problem most significantly impacting the Global North and the Western world. This is evident in how the American public approaches developments related to hostile information operations, with the government, media, and civil society largely preoccupied with malign Russian influence targeting Western elections and democratic institutions. That is not to say Russian influence operations are no longer a major concern: Russian disinformation campaigns have worked, and will continue to work, to undermine Western institutions and strategic objectives. However, much of this strategic institutional attention would be better focused on other realities of modern global disinformation operations.

Today, public perception of information operations should account for the current reality: most disinformation originates domestically rather than with a foreign actor, and information operations increasingly target and impact the Global South – at times causing serious harm, because a general lack of regulatory oversight in these countries makes responding to such operations difficult. Once our collective understanding of international information operations reflects this modern context, we will be better equipped to combat disinformation globally. To fight disinformation effectively all over the world, we must shift the paradigm around it.

The Rising Prevalence of Domestic Disinformation

Particularly in the U.S., the word “disinformation” conjures thoughts of Russian interference in Western elections. We tend to think of divisive information campaigns as primarily the work of malicious foreign state actors seeking to sow social and political discord and erode the foundations of liberal democracy. And while Russia has demonstrated this playbook in the past and continues to work actively to undermine the U.S. and Europe through coordinated disinformation operations, foreign manipulation should not be the default conception of modern information warfare. In fact, domestic disinformation is far more prevalent today: most disinformation is created within the target country, rather than manufactured by malign external actors.

This phenomenon holds true on a global scale. Disinformation campaigns have wreaked havoc across the Global South, fueling violence during election periods, stoking anxieties around alleged criminal activity, encouraging vigilantism, and undermining public health initiatives. One of the most notable and devastating recent cases of domestic disinformation is the Myanmar military’s homegrown campaign during the ongoing Rohingya genocide. From 2017 to 2018, Myanmar military personnel created fake personas on Facebook to stoke ethnic tensions, spreading fabricated stories alleging that the Rohingya Muslim minority had committed abhorrent crimes and violence against the wider population – all to bolster support for the state’s campaign of genocide against the Rohingya.

Figure 1: The Myanmar military’s commander in chief shared these images, which claim to depict ethnic conflict in 1940s Myanmar, to his now-deleted Facebook account; they are actually images from Bangladesh’s 1971 war for independence © New York Times

In the U.S. too, where the ‘foreign actor’ paradigm of information disorder dominates public discourse, domestically created disinformation is much more prevalent than foreign disinformation – and thus arguably has greater potential to cause significant harm. Indeed, in a 2018 study, Oxford University researchers found that 25 percent of Facebook and Twitter shares relating to the U.S. midterm elections contained deliberately deceptive or incorrect information. Most of that deceptive content came from domestic U.S. sources rather than Russian or other foreign actors.

Ultimately, focusing entirely on foreign-sponsored disinformation campaigns can lead us to overlook those originating with domestic actors, and thus to fail to address the campaigns that are most prevalent and that resonate most with the public – resonance that can translate into real-world consequences and violence. This was demonstrated by the U.S. Capitol insurrection in January 2021, in which hundreds of adherents of the #StoptheSteal election conspiracy movement breached the U.S. Capitol building and engaged in violence. At least five people died shortly before, during, or after the event, and two pipe bombs were placed outside the Republican and Democratic National Committees the night before the riot. In a 2019 study of domestic disinformation, author Paul M. Barrett concludes that foreign disinformation, and public preoccupation with its danger, only serves to distract from operational threats in the domestic sphere. Barrett further recommends that social media platforms and other stakeholders address false or misleading information wherever it comes from – even when it is peddled by domestic actors.

Figure 2: Protesters gather outside the U.S. Capitol building while police officers fire tear gas in an attempt to disperse the crowd © Jose Luis Magana/AP

For those still convinced that Russia and other malign foreign actors pose the largest disinformation threat, it is worth noting that most foreign malign influence relies primarily on exploiting existing divisions within the target society. In practice, this means Russian disinformation agents are much more likely to amplify narratives already in domestic circulation, since that messaging has already proven to resonate within certain communities. During the 2020 U.S. presidential election, for instance, online Russian disinformation concentrated on promoting falsehoods spread by domestic social media users rather than on manufacturing new narratives. Logically, then, any truly effective approach to mitigating foreign malign influence online must also address the source of the narratives being amplified – many of which are homegrown creations. When we become entirely preoccupied with a foreign actor paradigm of disinformation, we ignore the real and present danger posed by domestic actors.

Additionally, even when a foreign adversary creates and spreads their own narrative through a calculated disinformation campaign against a target country, the ultimate goal of these operations is to inculcate a given narrative within the minds of the domestic population so that it eventually circulates organically through local networks. A successful foreign influence campaign should be unattributable to the foreign actor that created the narrative, ultimately propagating naturally via domestic audience shares. In this case, too, the solution is the same – working to eliminate public susceptibility to false and misleading information.

The Crisis of Disinformation in the Global South

Around the world, governments are becoming increasingly concerned with the harm posed by an ever-growing volume of false information circulating online. In the U.S. and Europe, officials have already begun to develop basic legislative frameworks for addressing this problem. While these policy initiatives still fall far from providing a complete solution to digital disinformation, they do provide a basic foundation from which to approach the issue.

In the EU, for instance, current policy initiatives aim to tackle the online spread of fake news and disinformation through a multi-stakeholder approach: requiring greater responsibility from social media platforms for moderation and transparency, promoting initiatives to increase public media literacy and awareness of fake news online, and funding studies to develop advanced technological solutions and identify gaps in the current legal framework. More specifically, the European Commission has established a working group committed to tackling the spread of disinformation online. Thus far, the group has a number of ongoing initiatives, including a self-regulatory Code of Practice on Disinformation for online platforms (the first of its kind worldwide), the creation of the European Digital Media Observatory as a hub for fact-checkers, academics, and other stakeholders to support relevant policy and decision-making, and the development of monitoring and reporting tools and programs.

The prevailing paradigm for countering disinformation, however, derives automatically from the national viewpoints of the countries that have already developed frameworks to combat fake news. Furthermore, many of the digital tools and platforms where disinformation circulates (Facebook, Twitter, Instagram, WhatsApp, Gab, Parler, etc.) are owned by American, European, and Chinese companies – meaning that these platforms’ efforts to maintain information integrity are also largely defined by those same national perspectives and languages. As a result, these viewpoints fail to account for the broader state of global information operations, and cannot properly respond to it.

While we tend to think of disinformation as primarily targeting and affecting the Global North – especially the U.S. and Europe – the Global South has seen a boom in disruptive information operations, some of which have caused significant real-world harm. In 2018, the world reached an internet penetration milestone, with more than half of the world’s population able to access the internet. Internet accessibility increased most rapidly in Africa, where the share of the population with internet access rose from 2.1 percent in 2005 to almost 25 percent in 2018. Online accessibility across parts of Southeast Asia also saw impressive progress.

As has been true globally, the advent of the internet and social media has brought Global South countries a boom in the circulation of disinformation, with digital campaigns now able to reach more people, at a faster rate, and with less effort on the part of malicious actors. The Myanmar military’s calculated online campaign during the Rohingya genocide is one example, but the problem is growing across the Global South more broadly. From 2017 to 2018, for instance, rumors about alleged child abductions and organ harvesting circulated widely in India via WhatsApp and Facebook. The narratives alleged that strangers were speeding through villages and kidnapping children for nefarious purposes, and used text descriptions as well as video clips taken out of context. These stories sparked so much fear and outrage that they culminated in several mob lynchings over the course of two years. In Africa, a multitude of elections occur across the continent every year, some marred by intense political and social tensions that can escalate to violence. The risk of such violence rises when political opponents spread disinformation about rivals or push their own false narratives to promote themselves and stifle competition.

Figure 3: Locals protest a spate of vigilante mob attacks in Ahmadabad, India © Ajit Solanki/AP

Additionally, the presence of information voids in pockets of the Global South has compounded the issue of false information online. Citizens in certain jurisdictions may search online for information on a given topic, but search engine biases often yield sub-par results for culturally or regionally specific queries. For example, researchers examining public health queries in Africa from 2016 to 2017 found that when people searched for information about alternative health cures – such as treating AIDS with black seed oil or coconut – the queries often returned results from questionable blogs or untrustworthy sites promoting alternative medicines rather than reliable sources of public health information. Researchers attributed this to search engine and source bias: official government and other authoritative public health sites with the highest volume of online traffic were unlikely to actively mention and debunk these fraudulent alternative cures.

Further exacerbating this problem in the Global South is the absence, in many countries, of a regulatory framework for dealing with disinformation. While some countries have introduced basic legislative initiatives to tackle the problem, these have at times proven controversial, as seen in the fake news laws proposed and/or implemented in South Korea, Singapore, and across Africa, to name a few. While countries implementing fake news laws often cite very real problems related to disinformation, free speech advocates and human rights activists raise valid concerns that such laws can be used by oppressive governments to restrict free speech and excessively monitor communications.

Addressing the epidemic of disinformation globally requires sound understanding of the full scope of these operations, which includes shifting the perception of Global North governments that disinformation is an issue primarily impacting their jurisdictions. With increasing online access worldwide, information campaigns have taken hold everywhere – and their impact can be just as, if not more, devastating in Global South countries with a relative lack of digital regulatory oversight.

Jessica Terry, Disinformation Analyst
About Blackbird.AI
Blackbird.AI helps organizations detect and respond to threats that cause reputational and financial harm. Powered by its AI-Driven Narrative & Risk Intelligence Constellation Platform, organizations can proactively understand risks and threats to their reputation in real time. Blackbird.AI was founded by a team of experts in artificial intelligence and national security with a mission to defend authenticity and fight narrative manipulation. Recognized by Forrester as a "Top Threat Intelligence Company," Blackbird.AI's technology is used by many of the world's largest organizations for strategic decision-making.


