Even before a world flush with bots and Bitcoin, disinformation was wielded constantly. I first became familiar with its potency as Deputy Assistant Secretary of State for Intelligence during the Reagan administration. The KGB was ramping up a plot to convince the world that HIV/AIDS was a bioweapon developed by the Pentagon in order to erode trust in the US government abroad and from within its borders. Operation INFEKTION, as the KGB called it, was a massive effort involving fake scientific studies and journal articles, the sponsorship of foreign news outlets, and the recruitment of local agents to substantiate and disseminate the lie.
Some of these tactics, like false stories and trusted personas, likely sound familiar. They are now being leveraged by a far wider range of state-backed and non-state actors to achieve political, financial, or geopolitical gain. More importantly, engagement-driving algorithms and infinitely scalable computing resources have made what were once enormously expensive campaigns like INFEKTION far easier and faster to spin up. Those technological developments, which manifest in phenomena like bot networks and automated story generation, have changed both who is being attacked and who is attacking.
Bits of misinformation are not meaningful in their own right. The staying power of misinformation is positively correlated with the number of nodes in a constellation to which it can connect, which is why conspiracy theories continuously characterize new individuals, groups, organizations, and institutions as linked adversaries. The 2016 and 2020 elections, the COVID-19 pandemic, and the Russia-Ukraine conflict have shed light on the groups within our society most likely to spread misinformation. On the other hand, we have very few ways of detecting when, or at whose hands, deliberate disinformation activity takes place. We have still fewer tools to keep it from happening.
Blackbird.AI’s capability to analyze patterns in where disinformation originates, who amplifies it, and which channels disseminate it is essential for real-time mitigation and longer-term prevention. Instead of merely analyzing the fidelity of a narrative, its Constellation platform tracks the audiences, pathways, and inflection points (e.g., “going viral”) in a disinformation narrative’s lifecycle. The platform also gives would-be target organizations the power to anticipate future reputational and material risks by tracking groups who collectively manipulate information and by correlating open-source insights with criminal activity on the Dark Web.
We are increasingly seeing the convergence of interests between groups facilitating disruptive (or even destructive) cyber attacks and those facilitating the spread of disinformation. Both types of actors rely on the same information infrastructure (e.g., the Domain Name System) to reach their targets; enlist hijacked and bot-driven social media accounts as a means of gaining trust/access; and may seek to malign and extort the same kinds of commercial or government targets.
The total cost of cybercrime in 2021 amounted to nearly $6 trillion worldwide, including an explosion in the number of ransomware attacks that wreaked havoc on critical supply chains. Having warned about the likelihood of those disruptions in the nation’s first National Strategy to Secure Cyberspace back in 2003, I was hardly surprised that some entities proved vulnerable to attack. What was more surprising was how quickly attackers adopted new strategies. It took cyber extortionists very little time to realize there was no need to slap locks on a victim organization’s network. They could just as easily exfiltrate sensitive data from a target’s network, threaten to publicly disclose that data, and similarly expect a payday. The next, even lower-effort step for cybercriminals is to inflict reputational damage until companies pay them to stop.
The same tactics can just as easily target commercial entities’ reputations to move markets, to distract from other malicious activity like cyber attacks, or simply as part of conspiracy theories (as in the cases of Wayfair, Nike, and Dominion Voting Systems). With diminishing barriers to entry and a proliferating range of actors, we need a tool that can mitigate the spread of disinformation once it’s already out there. The capability of Blackbird.AI’s engine to scale its analysis at the speed at which disinformation itself spreads shows that such a needed intervention has already arrived.
Richard Clarke is an internationally known expert on cybersecurity, having been the first “cyber czar” for the US Government and author of the first National Strategy to Secure Cyberspace.
Blackbird.AI helps organizations detect and respond to threats that cause reputational and financial harm. Powered by its AI-Driven Narrative & Risk Intelligence Constellation Platform, organizations can proactively understand risks and threats to their reputation in real time. Blackbird.AI was founded by a team of experts in artificial intelligence and national security, with a mission to defend authenticity and fight narrative manipulation. Recognized by Forrester as a "Top Threat Intelligence Company," Blackbird.AI's technology is used by many of the world's largest organizations for strategic decision making.
BALANCING THE COMPLEXITIES OF ONLINE DISCOURSE
While all these recommendations seem sound, the likelihood that these measures will be agreed upon and implemented is shrinking in the U.S. and around the world. In fact, we have been moving in the opposite direction. Platforms have begun to roll back access for research communities, decrease moderation around misinformation, or eliminate moderation altogether in the name of freedom of expression. The very notion of banning a popular platform in the U.S. would have seemed unthinkable a few short years ago, with organizations like the ACLU strongly voicing that a ban on TikTok would violate the First Amendment.