For the last three years BLACKBIRD.AI has been documenting and exposing how pervasive digital influence campaigns built on disinformation, conspiracy, and politically polarized rhetoric have effectively come to function as a cyberattack on human perception. The long-term result is the evolution of a virtual army of citizens increasingly detached from a baseline of reality, to the point that real-life violence in the name of these distorted facts and falsehoods has developed into a critical national security threat.
The world caught a glimpse of what sustained distortion of reality can look like on January 6, when thousands of protestors-turned-rioters converged on the U.S. Capitol with the overarching aim of challenging the ratification of Joe Biden’s November 2020 election victory. For some, inflicting physical harm on lawmakers, government employees, and security forces was a concurrent goal, as evidenced by the firearms, tactical gear, and hostage-taking paraphernalia found on members of the mob; the pipe bombs placed in the U.S. Capitol’s vicinity; and the overwhelming volume of violent online messaging around a ‘second civil war’ in the weeks preceding the event. Biden’s win is primarily contested on the grounds that the Democratic Party unlawfully hijacked its way to a stolen victory via widespread voter fraud and corruption. No evidence for these claims has ever existed, yet violence came to pass regardless. Blackbird examines how this paradoxical disconnect between belief and reality has become the lethal hallmark of the disinformation age in the U.S. today, one which, if recognized and understood more comprehensively, could potentially have changed the course of how events on January 6 unfolded. The price is too high to permit disinformation-based risks to be dismissed as mere online clamor and Internet curiosities; understanding the gravity of intent behind the instigators of violence in Washington DC is now paramount to national security.
Ideal Conditions for Disinformation
The transition from online disinformation to offline violence does not happen overnight. Blackbird.AI has monitored how our information ecosystems are being systematically polluted to the point where individuals no longer trust what is reported by established media outlets, instead choosing to consume only content selectively aligned with their beliefs. The political, social, economic, and cultural conditions that fed into the U.S. Capitol riot are thus inextricably linked to the organization and regulation of our digital environments. Understanding these fundamental problems is the first step in tackling the challenges raised by disinformation, with the critical aim of preventing further instances of bloodshed fueled by polarized sentiments of animosity and antagonism.
Echo chambers and the conspiracy rabbit hole
BLACKBIRD.AI has previously examined how the architecture of social media platforms axiomatically produces highly personalized echo chambers that function as powerful mechanisms for confirmation bias and groupthink. Social media and content platform algorithms are geared to maximize engagement by promoting articles, videos, or follower suggestions designed to resonate with users’ pre-established interests and preferences. Continued exposure to the same ideas via newsfeeds and online communities artificially validates beliefs and ideals through the overrepresentation of corroborating sources and interaction with like-minded users, as seen in the recent upswing in momentum around anti-vax groups and QAnon supporters.
This is particularly concerning in the case of disinformation campaigns that seek to elicit a specific emotional response or promote alternate narratives based on falsified or distorted claims. Crucially, the disinformation echo chamber manipulates existing inclinations that represent the path of least resistance: if a user engages with content alleging that mail ballots are highly susceptible to voter fraud, content professing the invalidity of the November 2020 election will likely be pushed their way later. Disinformation narratives tend to compound, conditioning the user to accept interlinking theories and conspiratorial explanations that exploit the human proclivity to seek affirmation of personal beliefs. The collective validation of the echo chamber produces an illusion of infallibility, rendering it increasingly difficult for individuals to question or shake off its shared values. Clear evidence to the contrary is unlikely to penetrate this emotional conviction; as one intruder inside the U.S. Capitol on January 6 put it: “[Congress] works for us. They don’t get to steal it from us. They don’t get to tell us we didn’t see what we saw.”
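The engagement-maximizing feedback loop described above can be illustrated with a toy simulation. This is purely a sketch of the dynamic, not any platform's actual algorithm: a recommender that scores items by similarity to a user's past engagements will, round after round, narrow the range of viewpoints the user ever sees.

```python
import random

# Toy model: each content item is a point on a 1-D "opinion" axis in [-1, 1].
# The hypothetical recommender serves the k items closest to the user's
# average past engagement; the user engages with everything served,
# which reinforces the inferred preference. Illustrative only.

def recommend(items, history, k=3):
    center = sum(history) / len(history)  # user's inferred preference
    return sorted(items, key=lambda x: abs(x - center))[:k]

random.seed(42)
items = [random.uniform(-1, 1) for _ in range(500)]
history = [0.3]  # a single mildly partisan engagement to start

for _ in range(20):
    served = recommend(items, history)
    history.extend(served)  # engagement feeds back into the profile

# The full axis spans 2.0; the slice of it the user actually saw is tiny.
spread = max(history) - min(history)
print(f"opinion range seen after 20 rounds: {spread:.3f}")
```

Even starting from a mild initial preference, the served items cluster ever more tightly around it: the user's visible "world" collapses to a narrow band of the opinion axis, which is the echo-chamber effect in miniature.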
Once disinformation is embraced as a normative vehicle for mediating the world, there is no limit to the extent to which reality can be reinterpreted. With regard to the U.S. Capitol attack, this begins to explain why supporters of the QAnon hoax were so well represented among Stop the Steal agitators and why fake news mogul Alex Jones was among the funders of the ‘Save America’ rally that took place prior to the riot. Nor is it surprising that numerous public figures who support election fraud claims have also been quick to denounce the Capitol mob’s violence as a false flag operation by Antifa psy-ops agents designed to discredit an otherwise peaceful protest. Radical Leftism as the exculpation for right-wing embarrassment is a familiar trope, and its deployment is now running thin as media outlets and security agencies have been quick to identify lead agitators by name as prominent neo-Nazis, militia members, and QAnon adherents. They are known by their established online presence and documented appearance at white supremacist marches, anti-lockdown protests, and Trump rallies over the years (or, in one case, a current appointment as a Republican member of the West Virginia House of Delegates). Politicians blaming Antifa for Wednesday’s violence is thus as illogical as it is concerning: disinformation as a political weapon has evidently drawn new epistemological battle lines, where considerations such as facts and feasibility are ceded to the pursuit of desired political outcomes.
Online Disinformation to Offline Violence
Against a backdrop of growing political polarization, currently at an all-time high in the United States, disinformation has been able to flourish as both a product and a contributing factor. In its own reactionary and reductive way, the rabbit hole of conspiracy offers existential explanations and emotional validation that conventional U.S. politics has failed to deliver, particularly following the radical upheavals precipitated by the COVID-19 pandemic. Within these antagonistic realities, the capacity of disinformation to readjust collective reference points for interpreting the external world has profound consequences for processes of self-identification and subject formation. Digital connectivity between online communities of like-minded individuals creates a sense of camaraderie and belonging, which over time can engender new relational ties based on ideological orientation. Those who subscribe are allies in the search for truth, while those who do not begin to appear as the binary and unknowable ‘Other’, anathema to the group’s shared values and self-evident objectivity. Implicit within this is a fundamental process of dehumanization: to remove what is relatable between us as human beings is to chip away at our capacity for empathy and compassion. In the case of disinformation built around polarized rhetoric, fundamental to which are the exclusionary notions of white supremacy, xenophobia, hyper-nationalism, and conservative readings of gender and sexual identities, the vilification of those who do not conform is an instinctive progression that reproduces the ‘Other’ as a justifiable target for righteous outrage.
A playbook for violence written in full sight
Not every individual who consumes disinformation online becomes a conspiracy theorist, and not every conspiracy theorist is compelled to, or wants to, act on their beliefs in the offline world. But given enough time, a critical mass, where enough people fit the criteria for it to matter, will eventually be reached. Disinformation accelerates the approach to that critical mass by providing a bespoke focal point for feelings of frustration, disenfranchisement, and anger. Freed from the strictures of facts and reality, disinformation actors are able to sell their own interpretations of the world that fall closest in line with set ideals, convictions, or future goals. This is where the dangers of disinformation are most fully realized.
If we consider the events of the last four years in U.S. history, the playbook for January 6 has been written in full sight, with the foundations for an incident like the U.S. Capitol riot slowly prepared. November’s election set off nine weeks of momentum behind frenetic claims of widespread electoral fraud and a landslide of uncounted Republican votes. This was preceded by months of groundwork in which the unreliability of mail ballots and the Democratic Party’s intention to manipulate a power grab were repeatedly advanced by a number of public officials and media outlets, then promulgated over and over on social media. Looking further back, recent years are littered with instances of violent far-right rallies and attacks on pro-racial-equality movements in cities like Charlottesville and Portland. The visibility of anti-government militias in public life has risen sharply, evident in the storming of state capitol buildings by armed agitators in Michigan, Idaho, and Oregon, and the recently foiled plot to kidnap a Democratic governor and precipitate a second civil war on the grounds of perceived state overreach. Murder, attempted kidnappings, and armed invasions in the name of QAnon have prompted the FBI to designate the provocative right-wing conspiracy theory as a domestic terror threat, a fact which did not prevent the election of the first open QAnon supporter to Congress in November.
Each of these incidents builds further momentum, raising the profile of these actors and their causes in the ensuing maelstrom of online discussion, which breathes new oxygen into the conspiracy mythos as an ever-evolving ‘Choose Your Own Adventure’ for the disinformation age. Without practical measures to address the root of these threats, or robust consequences for the antagonists involved, there is little deterrence against future violence. Accordingly, despite the FBI’s recognition of far-right and white nationalist extremism as a “national threat priority”, sparse Capitol security on January 6 enabled rioters to gain entrance with relative ease to a building where senior national lawmakers, including the direct line of succession to the presidency, sat in a high-profile congressional session. Recent moves by social media companies to remove disinformation and hate speech from their platforms have similarly fallen short in scope and expediency, with plans for violence in Washington DC laid online across networks of far-right communities. The writing for January 6 had been on the wall, but few heeded the warnings.
Where do we go from here?
An FBI investigation is currently underway to identify rioters who planned violence in Washington in advance, with a number of individuals, many of whom live-streamed or photographed their participation in the assault, now in police custody. At the time of writing, self-professed ‘free speech’ social media network Parler has been forced offline after Amazon’s cloud platform AWS cut its hosting services to the company, citing its failure to moderate violent content, including content that directly contributed to incitement of the U.S. Capitol riot. Losing Parler (albeit perhaps only temporarily) may come as a blow to the far-right communities that found common cause there, but it is far from the only online space that can play host to these conversations, particularly within the closed networks of social media messaging. Declarations that armed insurrection will return in force on Inauguration Day, January 20, are already circulating online, including a Million Militia March to follow three days of protest in the U.S. capital and armed demonstrations at all 50 state government buildings around the country.
At BLACKBIRD.AI, we battle the onslaught of disinformation as soon as it arises. Disinformation is often crafted to drive motivated and conflicting narratives at scale, yet these risks frequently sit below the waterline of what typical social listening and sentiment analysis toolsets can detect. A new generation of risk vectors, manifested in the form of low-level persistent threat actors, distributed coordinated campaigns, and the dark web activities and blogs of organized extremist movements, only scratches the surface of what is to come in the next few years as arsenals continue to evolve.
Only through successful identification will the threats posed to our information ecosystem be more fully understood, enabling commercial and national security organizations to take effective mitigating action against these deeply polarizing and hostile challenges at their source. Failed insurrections are often followed by successful ones; effectively recognizing the threat posed by disinformation in our political and information ecosystems is the first step toward a more inclusive and peaceful future.
Blackbird.AI helps organizations detect and respond to threats that cause reputational and financial harm. Powered by its AI-driven Narrative & Risk Intelligence Constellation Platform, organizations can proactively understand risks and threats to their reputation in real time. Blackbird was founded by a team of experts in artificial intelligence and national security, with a mission to defend authenticity and fight narrative manipulation. Recognized by Forrester as a "Top Threat Intelligence Company," Blackbird's technology is used by many of the world's largest organizations for strategic decision-making.
Balancing the Complexities of Online Discourse
While these recommendations seem sound, the likelihood that such measures can be agreed upon and implemented is diminishing in the U.S. and around the world. In fact, we have been moving in the opposite direction. Platforms have begun to roll back access for research communities, decrease moderation around misinformation, or strike down moderation altogether in the name of freedom of expression. The very notion of banning a popular platform in the U.S. would have seemed unthinkable a few short years ago, with organizations like the ACLU strongly voicing that a ban on TikTok would violate the First Amendment.