From botnet attacks capable of taking whole countries offline, to hostile foreign information operations eroding election integrity, bots appear to have infiltrated every corner of the internet as the digital weapon of choice. Most recently, bots became a topic of speculation following Elon Musk’s claim that one-third of all visible Twitter users are “false or spam accounts”, a claim that underpinned his attempted withdrawal from a $44 billion deal to purchase the social media platform.

Readers are probably already aware of what a bot is: a software application that performs simple automated tasks at a far faster rate than any human could manage. Bots can be programmed to artificially boost the visibility of online content by synchronizing thousands of posts, or to game online search rankings by activating links and advertisements. Malware-infected devices can be conscripted into networks of bots used to mount high-traffic Distributed Denial of Service (DDoS) attacks that disrupt access to websites or servers. Bots can also be used to impersonate humans in phishing attempts designed to trick victims into handing over sensitive information.

In short: bots are bad news.

It’s no wonder, therefore, that becoming the target of an online bot attack is a frequent concern for companies and organizations.

Just ask PayPal, which faced serious financial and reputational losses after admitting that 4.5 million bot accounts had exploited monetary reward schemes for new users in 2022. Indeed, a recent report by cybersecurity firm Kasada found that, of the companies surveyed that had experienced at least one bot attack in the past year, 39% reported losing 10% or more of their revenue as a direct result of the incident.

However, bots are not the only risk that’s out there. And by over-indexing on the threats posed by bots, companies and organizations run the risk of ignoring other, equally harmful and disruptive forms of digital or information manipulation attacks that might come their way.

In essence, bots are often perceived as dangerous because they can undertake large-scale tasks that humans cannot. What tends to be forgotten, however, is that the things humans, not machines, do best are just as deserving of our concern.

Take disinformation, for example, which was recently named by the European Union Agency for Cybersecurity (ENISA) as one of its top 10 emerging cybersecurity threats for 2030.

Humans might not be as fast as bots at spreading content online, but our strength lies in authentic, meaningful, and contextually relevant communication that can inform and persuade. When it comes to disinformation or extremist rhetoric, connecting with seemingly like-minded people whose messaging resonates with your own beliefs or fears can often be a compelling means by which alternative ideas and narratives are introduced. Although the actual on-the-ground impact has been debated, bots were deployed by the Russian state to influence the outcome of the 2016 US presidential election alongside so-called ‘web brigades’: thousands of humans paid to spread narratives of political and social discord on social media. These real-life Disinformation as a Service (DaaS) operators were described as “fluent in American trolling culture” in a report commissioned by the US Senate Intelligence Committee, indicative of their success in infiltrating and engaging with the populations they targeted according to political affiliation, socio-economic background, age, and race.

It is this human credibility of communication and connection that malign actors often seek to co-opt, in a way that bots may struggle to replicate. For example, in 2021 a number of European YouTube, Instagram, and TikTok influencers reported receiving financial offers from a Russian state-linked communications firm to tell their online followers that Pfizer’s COVID-19 vaccine was unsafe. For information manipulation actors, these fan communities represent a convenient shortcut to pre-established, highly responsive audiences. Human influencer ‘clout’ is easier to purchase than to recreate synthetically from scratch.

Furthermore, human users seeking to manipulate social media conversations are often much harder to detect, given the authenticity of their behavior. Bots tend to operate in irregular ways that deviate from real-life patterns of discourse, and can be detected with the right toolkit, or sometimes with just a glance at a Twitter profile full of spam posts, a randomly generated handle, and no profile picture. Although deepfakes are a frequent topic of discussion as the technology evolves to become more realistic (and dangerous), they do not yet represent nearly as much risk as conversational data does. Anyone who has ever attempted to properly converse with a chatbot in the same way as with a person, be it an innocuous virtual assistant or a dating app phishing attempt, will be aware of their general inability to truly pass the Turing Test.
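By way of illustration only, here is a minimal sketch of how those surface-level signals might be combined into a crude score. The field names, thresholds, and weights are hypothetical, invented for this example rather than drawn from any real detection toolkit, and a production system would weigh far richer behavioral signals.

```python
# Illustrative sketch only: a toy heuristic "glance at the profile" bot score.
# Field names, thresholds, and weights are hypothetical and not tied to any
# real platform API or detection product.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Profile:
    handle: str                      # e.g. "jane_doe" or "user84629103"
    has_profile_picture: bool
    posts: List[str] = field(default_factory=list)


def bot_likelihood(profile: Profile) -> float:
    """Return a crude 0-1 score from a few surface-level signals."""
    score = 0.0

    # Signal 1: randomly generated handles tend to be digit-heavy.
    digits = sum(ch.isdigit() for ch in profile.handle)
    if profile.handle and digits / len(profile.handle) > 0.4:
        score += 0.35

    # Signal 2: no profile picture.
    if not profile.has_profile_picture:
        score += 0.25

    # Signal 3: near-duplicate spam posts (low ratio of unique content).
    if profile.posts:
        unique_ratio = len(set(profile.posts)) / len(profile.posts)
        if unique_ratio < 0.5:
            score += 0.4

    return min(score, 1.0)


if __name__ == "__main__":
    spammy = Profile("user84629103", False, ["Buy now!"] * 20 + ["Great deal!"] * 5)
    human = Profile("jane_doe", True, ["Morning run done", "Anyone read this book?"])
    print(f"spammy account score: {bot_likelihood(spammy):.2f}")  # high
    print(f"human account score:  {bot_likelihood(human):.2f}")   # low
```

Heuristics this simple catch only the clumsiest automation, which is precisely the point: accounts run by real people manipulating a conversation leave none of these obvious traces.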

The end result is a situation where it is often easy to create volume using bots (via the rapid publication of posts around a certain topic), but generally much harder to generate legitimate human engagement with that content. This is not to say that it never happens, but rather that the meaningful presence, reach, and impact of what bots say online can sometimes be drastically overestimated and must be read in concert with a variety of other signals and indicators.

Behind all of this sits the algorithmic infrastructure that drives social media platforms. Content recommendation features, trending aggregators, and targeted advertisements can be easily exploited by bad actors, whether bot or human, to maneuver online discourse. The problem is compounded by regulatory provisions that all too often struggle to enforce the removal of antagonistic operators and content. It is therefore both the online and offline anatomy of our social media ecosystem that we need to understand in order to best meet these challenges.

None of this is to say that bots pose less risk than expected. As the examples cited at the beginning of this article show, that is certainly not the case for many companies, organizations, and national governments. And as hyper-realistic generative AI continues to advance, the threats posed by digital technologies deployed to disrupt and manipulate are growing rapidly.

Rather, the message is that bots are only one part of a complex landscape of information manipulation activities and risks. Bots might grab the headlines, but human threat actors skilled at narrative manipulation pose a threat to digital security that is just as significant, albeit different in form. Understanding the full range of digital threat actors and their strengths, weaknesses, and motivations at a wider, holistic level is therefore paramount to guarding against them.

Roberta Duffield, Director of Intelligence
BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Powered by our AI-driven proprietary technology, including the Constellation narrative intelligence platform, RAV3N Risk LMM, Narrative Feed, and our RAV3N Narrative Intelligence and Research Team, Blackbird.AI provides a disruptive shift in how organizations can protect themselves from what the World Economic Forum called the #1 global risk in 2024.