DATA TIMELINE: FEBRUARY 2-14, 2020
REPORT V1.1 PUBLISHED FEBRUARY 19, 2020
BLACKBIRD.AI – INTELLIGENCE REPORT
Blackbird.AI uses machine learning, network analysis and propagation patterns to identify and combat disinformation. Our proprietary AI-Driven SaaS platform provides a sophisticated understanding of both manipulation and harmful digital media. The Blackbird Platform surfaces synthetic amplification from bots and bot-nets, delegitimization campaigns, and meme-driven propaganda.
Through our methodology we have quantified the total volume of tweets originating inorganically, which is a prototypical indicator of a manipulation campaign.
Manipulation leads to harmful inorganic content, defined as any content that can be used to drive harmful narratives.
Disinformation is defined as false information intended to mislead; however, in today’s culture it has become a catch-all term for any manipulative or harmful information-spreading activities. Synthetically amplified manipulation campaigns blend fact and fiction to intentionally create a deleterious effect on their subject.
BLACKBIRD MANIPULATION INDEX
Our Blackbird Manipulation Index (BBMI) quantifies the total percentage of inorganic posts with intent to manipulate. We provide a numerical index showing the degree of manipulation across the information ecosystem.
The COVID-19 Coronavirus has caused unprecedented levels of health concern across the globe. To date, the economic impact has also been monumental, totaling approximately $90B with no end in sight. Manipulative actors of all types have capitalized on this global health crisis, mobilizing with the goal of spreading fear and panic across social media and tapping into every possible facet of social and economic narratives. Significant ongoing campaigns range from healthcare and religious beliefs to economic impacts, xenophobia, and scare tactics. While there have been outbreaks like SARS and Swine Flu in years past, never has the information ecosystem been so poised to generate such an enormous wealth of inorganic content.
According to McKinsey’s Covid-19 Crisis Response Report for company decision makers, released on February 14th, disinformation is one of the exacerbating socio-economic factors that will define the overall industry impact of COVID-19. The report clearly states that the severity of potential long-term scenarios, ranging from “quick recovery” to “global slowdown” depends on “what you believe”.
Modern disinformation campaigns primarily focus on the modification of perceptions and beliefs. It is critical to note that in today’s information ecosystem, public outcry and concern can be manufactured, and the people whose decisions these campaigns can influence extend to policy makers, government officials, and even entire nation-states. Everyone must recognize both the scope and scale of disinformation attacks within the COVID-19 crisis.
Blackbird.AI rapidly identified, through AI-driven insights, significant amounts of inorganic content across social media. We also detected anomalies in techniques being used to shift public perception via the manipulation of narratives.
Our platform surfaced relationships within social media campaigns and quantified the percentage of inorganic content they contained. Our system revealed a variety of actors that introduced unnecessary uncertainty into the global information ecosystem. These actors included people, outlets, and entire networks of coordinated, inorganic amplifiers and bot networks.
STAGGERING LEVELS OF INORGANIC CONTENT
Blackbird.AI’s COVID-19 Disinformation Report analyzed 6,923,257 tweets from 2,611,168 unique users on topics related to COVID-19 for the period of February 2-14, 2020. We found approximately 2.67 million tweets to be inorganic, leading to a Blackbird Manipulation Index (BBMI) of 38.7%.
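The BBMI above is a straightforward ratio of inorganic to total posts. A minimal sketch of the calculation, using only the figures quoted in this report (the function name is ours; the report’s published 38.7% reflects the exact inorganic count, while the rounded 2.67M figure yields approximately 38.6%):

```python
def bbmi(inorganic_tweets: int, total_tweets: int) -> float:
    """Blackbird Manipulation Index: share of inorganic posts, as a percentage."""
    if total_tweets <= 0:
        raise ValueError("total_tweets must be positive")
    return 100.0 * inorganic_tweets / total_tweets

# Report figures: ~2.67 million inorganic tweets of 6,923,257 total.
# The rounded input gives ~38.6; the published 38.7% uses the exact count.
print(round(bbmi(2_670_000, 6_923_257), 1))
```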
The green region shows organic conversations and the red region indicates the inorganically amplified propaganda conversations.
When news about a novel Coronavirus emerging in China’s Hubei province initially broke, manipulative actors spread chaos; over time, some inorganic themes and patterns stuck, levelling out—but keeping pace—with organic content.
In the following narrative discoveries, we offer a high-level forensic landscape analysis of the actors and networks contained in four of the largest system-surfaced disinformation narratives related to COVID-19 that occurred within our recorded time frame.
NARRATIVE 1: RELIGIOUS ANTI-MEAT SCAM
Total Inorganic Tweets: 922,784
Avg Tweets Per User: 31
Our system uncovered a massive spam campaign spanning 47 languages. This traffic, volume, and breadth of reach across communities was driven by one persona and his inorganic amplifiers: the online religious guru “Saint Rampal Ji Maharaj.” He has 36.2K Twitter followers and claims to be the reincarnation of Kabir (“Great” in Arabic), a 15th-century Indian mystic, poet, and saint who is a religious icon with import among followers of Islam, Hinduism, and Sikhism.
With some 927,908 tweets’ worth of activity among 29,145 users, Saint Rampal’s #NoMeat_NoCoronavirus campaign contained a remarkable 99.4% inorganic content. In this conversation, Rampal was the most amplified persona; his image, products, and anti-meat message proliferated through manipulation of his SA News Network, bot networks and (potential) botnets, as well as a synchronized cross-platform presence. His YouTube channel, where he regularly broadcasts spiritual messages, has 45K subscribers, and he can also be found on Facebook (227K+ followers), Instagram (24K+), LinkedIn, Pinterest, Tumblr, and his own domain, http://supremegod.org.
Not only was the majority of this activity rife with lies, fundamentalist propaganda, and plain misinformation, but it was heavily amplified inorganically. By February 2nd, in tweets mentioning “#NoMeat_NoCoronavirus” and across user profiles active around the hashtag, we found indications of manipulated amplification, with the conversation containing this particular hashtag reaching at least 919,664 tweets. This leveled off after the second day of February, suggesting that the “guru” had successfully spread his anti-meat cause with measurable results—and a measurable trail of 29,145 users who tweeted an average of over 31 times within the campaign.
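The per-user average cited above can be reproduced directly from the report’s own figures. A minimal sketch (the helper name is ours):

```python
def avg_tweets_per_user(tweet_count: int, user_count: int) -> float:
    """Mean number of tweets contributed by each unique user in a campaign feed."""
    if user_count <= 0:
        raise ValueError("user_count must be positive")
    return tweet_count / user_count

# Narrative 1 figures from the report: 927,908 tweets among 29,145 users,
# consistent with the stated average of "over 31 times" per user.
print(round(avg_tweets_per_user(927_908, 29_145), 1))
```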
Given the staggering inorganic volume of the #NoMeat_NoCoronavirus conversation, this organized disinformation campaign likely had multiple goals. The campaign exploited religious bias in a manner that guaranteed wide exposure beyond any single spiritual community or national boundary, appealing across several religious communities, namely Hindus, Muslims, and Sikhs. Several angles in our data support this; one prominent case was the memetic use of the name and iconographic image of Kabir without direct mention of Rampal. The name “Kabir” was mentioned 32,784 times in English alone within the feed.
As this diagram shows, inorganic content (red) makes up almost all conversations when compared to organic content (green).
“Saint Rampal’s” accounts were deployed in an opportunistic manipulation campaign in the early days of the coverage of what came to be called COVID-19 by the WHO, reaping the benefits in clout and dissemination.
Shared a cumulative 14,411 times among 13,149 unique user accounts, the five top-volume–ranking images—one is presented above—all promoted Sant Rampal Ji Maharaj and his “free sacred book” Gyan Ganga, and solicited personal details via a WhatsApp group. Some of the imagery also incorporated screenshots of legitimate news outlets like The Guardian—suggesting an intentional forging of antipathy towards Chinese food practices across Western audiences as well as spiritual constituencies further East (i.e., crafting a shared vocabulary).
The top two user accounts amplifying #NoMeat_NoCoronavirus both mention Kabir in their profile bios; this alone provided exposure to the name Kabir: @BituDaas appeared in the feed 1,548 times, and @Parveen02686575 1,359 times; both accounts were started in 2019. These two accounts are likely bots, and certainly inorganic propagators of Saint Rampal’s campaign.
NARRATIVE 2: DELEGITIMIZING CHINESE CULTURE
Total Inorganic Tweets: 204,397
Avg Tweets Per User: 1.8
Early in the outbreak, unverified claims circulated that COVID-19 likely emerged from “wild bats,” and conjecture that Chinese people were eating “bat soup” was rampant across social media; less prominent, but still present, were tropes about “killing pets,” particularly dogs, joined by calls to #boycottchina.
Campaigns to vilify the Chinese government drew on more than the alleged “barbarity” of Chinese culture. This was often done through claims, frequently posted with unverified and decontextualized video “proof,” that Chinese state officials were “welding” apartment doors shut, sealing off cities by filling tunnels with dirt, and spraying unknown substances.
A total of 228,569 users and 430,319 tweets were detected by our system on the theme of delegitimizing Chinese culture, with a BBMI of 47.5%, meaning nearly half of that content was manipulated.
A number of clout-chasing, race-baiting posts appeared in the feed of chatter around Chinese food practices, some from figures already well known, like Paul Joseph Watson, Alex Jones’ Infowars editor-at-large. Users like Watson (@PrisonPlanet) are well known to observers of the international white power movement, but appeared only 3 times in our feed of Chinese-culture–denigrating conversation. Our system surfaced the larger patterns and waves that users like @PrisonPlanet are manipulating in this ongoing news media event.
Ongoing inorganic activity from known Hong Kong independence campaign accounts seems to be the biggest driver of the increased volume around February 9, the day following a two-day string of major-outlet coverage of the potential for the virus to impact the movement.
It is possible that the protestors glommed onto this media activity, particularly Twitter content pertaining to the totalitarian nature of mainland China’s quarantines, inserting themselves into a grander, emergent anti-Chinese government narrative. Hashtags like #HongKongProtests and #HKPolice show major spikes on February 9, one day after several articles came out from NYT, Bloomberg, Vox, and more.
An unverified deluge of decontextualized video and image media shook confidence in the Chinese government’s handling of the coronavirus.
The food politics of Saint Rampal’s anti-meat religious campaign figured centrally in manipulated content around COVID-19 broadly. Out of the conversational landscape delegitimizing Chinese culture, #NoMeat_NoCoronavirus-associated material ranked highly again, with even the most-shared image (1,901 times among 1,725 users) in this feed coming from the campaign, indicating the global anti-Chinese sentiment animating much of the manipulated conversation.
NARRATIVE 3: BIO-WEAPONS CONSPIRACY
Total Inorganic Tweets: 163,600
Avg Tweets Per User: 1.6
Various iterations of a conspiracy theory regarding COVID-19 and bio-weaponry were surfaced by our system. Claims were made that “two Chinese nationals” stole COVID-19 from Canadian defense forces; instigated by a preprint microbiology paper on the virus’ protein structure, several online factions averred that there were “HIV insertions” indicative of a biological weapon; and, dovetailing with claims of a “new world order” bent on enacting a global “depopulation” scheme, others laid the blame for the virus on “Israeli biological warfare.”
A total of 428,534 users and 709,349 tweets were detected by our system on the theme of COVID-19 and bioweapons/biological warfare, with a BBMI of 23%. We found a large volume of tweets with only a moderate percentage of inorganic content. Although scoring lower on the Blackbird Manipulation Index, the total raw volume of inorganic content is still quite high and largely multipolar in origin, with the actors accused of manufacturing the virus in these conspiracies ranging from the United States, to Israel, to China.
The primary theory claimed that the virus not only emerged in the city of Wuhan in China’s Hubei province, but was intentionally created in the city hardest hit thus far by the viral infection. As the NYT noted Monday morning, “The conspiracy theory lacks evidence and has been dismissed by scientists. But it has gained an audience with the help of well-connected critics of the Chinese government such as Stephen K. Bannon, President Trump’s former chief strategist. And on Sunday, it got its biggest public boost yet.” That boost came from Arkansas Republican Senator Tom Cotton on live television, and our system detected the many loose ends tied together in his public planting of the conspiratorial seed on Sunday, February 16th. This is a keystone example of a conspiracy being pushed into the mainstream by a person of influence, dramatically expanding its reach and legitimacy.
Garnering sizeable tweet volume, but only moderate percentages of inorganic content (at its February 4th peak, 28%), this conspiracy likely polluted a number of organic Twitter communities.
Nonetheless, the total volume of inorganic content is very high in this feed, and thus worth exploring in more forensic detail, particularly activity around the hashtag #MakeADifference and conversation involving HIV advocacy as it pertains to the Thai television program ch3thailand.
On Sunday, February 16th, Arkansas Republican Sen. Tom Cotton gave a prominent voice to the theory that, specifically, China had developed the novel coronavirus as a biological weapon. A number of versions of the theory were already in the air, amplified organically and inorganically by users across 56 languages.
(Top) Before reaching such offline prominence, our system detected that Stephen Bannon’s version of the bio-weapons conspiracy had already garnered traction in a Citizens of the American Republic YouTube upload of Bannon’s podcast War Room, circulating on Twitter and appearing 1,620 times in our feed on delegitimizing Chinese culture. (Bottom) The far-right independent media outlet Zero Hedge (appearing in 825 tweets among 800 unique users) gave air to the claim that the Wuhan BSL-4 lab was the origin of the virus by hosting Dr. Francis Boyle, who alleged it was developed as an offensive biological weapon, perhaps even by the United States.
NARRATIVE 4: TENCENT NUMBERS
Total Inorganic Tweets: 47,135
Avg Tweets Per User: 1.4
A chart depicting a purportedly leaked or otherwise mistakenly posted internal “Epidemic Situation Tracker” at Tencent, China’s biggest tech and media conglomerate, peppered a variety of organic and inorganic networks. The numbers in this chart were much higher than what the Chinese government was providing regarding the spread of the illness across China. Soon after these depicted numbers started making their rounds, Tencent released a statement saying that the image being circulated was doctored, yet suspicions over the veracity of Chinese state numbers remained.
A total of 75,511 users and 106,470 tweets were detected by our system around the topic of the purportedly leaked numbers from Tencent, with a BBMI of 44%. Rampant distrust of the Chinese government, manipulated campaigns delegitimizing Chinese culture and people, and the insidious utility of a “falsified leak” that rapidly spread across social media combined in a real-time display of the potentially perilous macroeconomic effects of human cognitive bias across networked propaganda campaigns.
Suggestive of the chaotic and uncertain backdrop, the “leaked numbers from Tencent” narrative proliferated organically as well as inorganically, with manipulative campaigns and efforts keeping relative, parallel pace. It spiked around February 5th, the day the Tencent story broke, a period when overall inorganic activity increased in aggregate across all our system-generated narratives.
Following the “leak,” global news sources and personages, both fringe and mainstream, reported on the unsettling but fake numbers. Coverage and commentary ran across sources as diverse as mastheads like Taiwan News and personalities from One America News Network. In actuality, the WHO announced on February 5th that there were 24,363 confirmed cases and 491 deaths in China.
Blackbird.AI has demonstrated the significance and impact of disinformation across public and private sectors. Threat actors have created a new template for manipulation, fear mongering, xenophobia, and more. Governments, organizations, entire industries, universities, and individuals have all been affected by the myriad forms of disinformation spread online, often affecting the economics, security, comfort, perceptions, and predispositions that are the very foundations of everyday life.
The impact of manipulation campaigns around COVID-19 goes far beyond healthcare due to the unprecedented nature of this global event. Blackbird’s Manipulation Index (BBMI) provides a significantly deeper understanding of the disinformation landscape, enabling key decision makers to make consistently smarter, more informed decisions.
With inorganic narratives stemming from across the globe, it’s obvious the time has come to identify, analyze, and combat these abuses of communication. Understanding intent enables us to identify the goals of these campaigns and minimize their impact. At the end of the day, the old adage stands true: “knowledge is power.” The ability to measure and understand the differences between what is accurate and what is not allows us to minimize potential harm while enabling positive change.