The tech industry has a problem, and its name is Surveillance Capitalism. So goes the premise of Netflix’s recent documentary The Social Dilemma, which delivers a compelling account of how the business model of a handful of corporations is transforming our minds and societies through the exploitation and sale of users’ private digital data. Blackbird.AI considers how disinformation fits into this process. We reveal how “fake news” is a critical force multiplier in the deliberate manipulation of our cognitive biases in the search for private corporate profit. The end result is a growth cycle of disinformation, whereby the concepts of Truth and Fact are ceded to the affirmation of personal emotion, in doing so entrenching growing socio-political polarization within our communities. We believe that the pursuit of information integrity is thus more relevant than ever. Upheld by callous digital infrastructures indifferent to its human cost, disinformation holds the potential to detrimentally alter the trajectory of our collective future on a mass scale.

What is surveillance capitalism?

“Great predictions begin with one imperative – you need a lot of data.”
Shoshana Zuboff, The Social Dilemma
Professor Emerita, Harvard Business School
Author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019)

In order to understand the challenges posed by disinformation, we first need to understand the mechanics of ‘surveillance capitalism’ – a concept proposed by academic (and Social Dilemma talking head) Shoshana Zuboff. In brief: large technology corporations such as Google, Facebook and Amazon have expanded their capacities for digital data collection and analysis as more and more human activity moves online. Bolstered by a dearth of regulatory oversight, these corporations now generate billions of dollars in revenue from the sale of metadata extracted from their platform users to increasingly diverse third parties. These range from advertising firms, credit companies and insurance brokers, to governments, national security agencies and political campaigners.

Processed by new algorithmic technologies, this metadata provides an unprecedented level of insight into users’ personal lives, experiences and personalities, which can be used to anticipate, encourage and modify future actions on an individually calibrated level through psychographic profiling. This is Zuboff’s surveillance capitalism: a new logic of capitalist accumulation built upon the commodification of and trade in human futures, enabled and upheld by digital technological infrastructures.

How do digital technology platforms encourage human engagement?

“Social media isn’t a tool that’s just waiting to be used. It has its own goals and it has its own means of pursuing them by using your psychology against you.”
Tristan Harris, The Social Dilemma
Founder of the Center for Humane Technology
Former design ethicist, Google

For the tech companies that stand to profit most from surveillance capitalism, the calculus is simple: the more digital data that can be collected from platform users and their online activities, the more revenue generated by its sale. Engagement is therefore the fuel that drives this cycle, from every Google search, YouTube clip watched, meme re-Tweeted, Amazon item purchased, or Facebook message sent. Creating an environment where individuals willingly hand over their personal information through these digitally-networked activities is thus key to maintaining a steady stream of lucrative income for technology corporations.

For instance, psychological tricks deliberately built into the architecture of online platforms successfully exploit our mechanical impulses to engage with our devices. Push notifications deliver dopamine hits that keep us absentmindedly checking our cell phones or scrolling social media feeds in a state of perma-engagement that ensures a ready source of data to be harvested for profit.

Perhaps more concerning, however, is the concept of algorithmically generated content recommendation. The premise is simple: humans, whether online or offline, enjoy engaging in activities or interacting with people that reflect their own pre-established interests and ideals. Data mining builds up a profile of an individual’s personal habits, beliefs and personality which algorithms use to predict and identify what online content—be it a news article, product advert or Facebook group suggestion—a user should be exposed to next, in order to maximize the likelihood of their engagement. If successful, the algorithm will continue to recommend similar or related material, prioritizing content that has proved popular with similar demographics.
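To make this cycle concrete, the following is a minimal, purely illustrative Python sketch of an engagement-driven recommender of the kind described above. The vectors, weights and function names (`recommend`, `update_profile`) are hypothetical stand-ins rather than any platform’s actual system; the point is that ranking optimizes predicted engagement, and every interaction feeds back into the profile that shapes the next recommendation.

```python
# Illustrative sketch only: a toy engagement-driven recommender.
# All data, weights and function names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS, DIM = 50, 8
items = rng.normal(size=(N_ITEMS, DIM))     # content "topic" vectors
popularity = rng.random(N_ITEMS)            # engagement among similar demographics
user_profile = rng.normal(size=DIM)         # profile mined from past behavior

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(profile, k=5, w_sim=0.7, w_pop=0.3):
    """Rank items by predicted engagement: similarity to the user's
    profile plus popularity with similar users. Veracity plays no part."""
    scores = np.array([w_sim * cosine(profile, item) + w_pop * pop
                       for item, pop in zip(items, popularity)])
    return np.argsort(scores)[::-1][:k]

def update_profile(profile, engaged_item, rate=0.2):
    """Feedback loop: each engagement pulls the profile toward the
    content just consumed, narrowing what gets recommended next."""
    return (1 - rate) * profile + rate * engaged_item

# One turn of the cycle: recommend, observe a click, refine the profile.
top = recommend(user_profile)
user_profile = update_profile(user_profile, items[top[0]])
print("recommended items:", top)
```

Note that nothing in this loop evaluates the content itself – only the likelihood that a given user will engage with it.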

Figure 1: How surveillance capitalism perpetuates its profit cycle by maximizing human engagement with online content in order to collect user data (Blackbird.AI)


What happens when our engagement is consistently manipulated in this way?

“We all simply are operating on a different set of facts. When that happens at scale, you’re no longer able to reckon with or even consume information that contradicts with the world view that you’ve created.”
Rashida Richardson, The Social Dilemma
Director of Policy Research, AI Now Institute

Over time, algorithmic content recommendations that only promote self-referential material can saturate our online ecosystems. Our newsfeeds begin to function as highly personalized echo chambers of validation that display only a narrow range of interests and ideals. The result is a powerful mechanism for confirmation bias: the tendency for our brains to seek the path of least resistance by selectively interpreting information that resonates with prior beliefs and values. In the offline world we still encounter conflict, contradiction and disappointment through the messiness and friction of everyday human existence; in the algorithmically determined online vacuum of homepages and newsfeeds, we do not. Surveillance capitalism has no need for balanced political representation in its content recommendations; it seeks only data.

This has serious consequences for both individual psychology and the wider socio-political landscape. Without exposure to alternative ideas and the opportunity for debate, the political viewpoints of the echo chamber are gradually established as normative realities. This is upheld by the mutual reinforcement and camaraderie that digital connectivity permits. As communities crystallize, the phenomenon of group-think runs the risk of emerging. The collective validation of the group produces an illusion of infallibility, rendering it more difficult for individuals to question its shared values. Anyone on the outside becomes the Other, separated by an algorithmic chasm that artificially vilifies other points of view – indeed, political polarization is currently at an all-time high in the United States. This holds the potential to go beyond mere political rivalries. Throughout history, instances of group-think have enabled dysfunctional norms of conformity and decision-making processes instrumental in acts of violence, such as torture at Abu Ghraib, mass suicide in Jonestown and genocide in Rwanda. Online echo chambers may not necessarily lead to massacres, but the idea that seeds of resentment towards those who are different from us can be sown from within our own homes, changing the ways in which communities view even their own neighbors, is a sobering thought.

How does disinformation fit into these processes and what is the human impact?

“We’ve created a system that biases towards false information – not because we want to, but because false information makes companies more money than the truth. The truth is boring.”
Sandy Parakilas, The Social Dilemma
Senior Product Marketing Manager, Apple
Former Chief Strategy Officer, Center for Humane Technology

With regard to disinformation, surveillance capitalism has hit the motherlode. A 2018 MIT study of disinformation on social media found that fake news spreads six times faster than real news online, chiefly because the heightened novelty and emotional charge of manipulated content drive higher levels of human engagement than its truthful counterparts receive. Unsurprisingly, given its affective and subjective properties, falsified political news tends to diffuse significantly further, faster and deeper than falsified information on any other subject. And against the authors’ expectations, humans, not bots, are likely to be the major agents in its enhanced rate of dissemination. In short: humans appear to instinctively prefer fake news over the truth, as a result of our susceptibility to emotive external manipulation.
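The dynamic the study describes can be illustrated with a deliberately simple, hypothetical cascade model: if each exposure to an emotive falsehood is more likely to produce a reshare than exposure to a sober truth, the falsehood reaches the same audience in far fewer sharing steps. The share probabilities and follower counts below are illustrative assumptions, not figures from the study.

```python
# Toy expected-value cascade: content with a higher per-exposure share
# probability reaches a target audience in fewer sharing steps.
# All numbers below are illustrative assumptions, not empirical values.

def steps_to_reach(share_prob, audience=1500, followers=20, max_steps=50):
    """Each sharer exposes `followers` users; a fraction `share_prob`
    of those exposed reshare in the next step."""
    sharers, reach, steps = 1.0, 1.0, 0
    while reach < audience and steps < max_steps:
        exposed = sharers * followers
        reach += exposed
        sharers = exposed * share_prob
        steps += 1
    return steps, int(reach)

for label, p in [("truthful, low-arousal content", 0.06),
                 ("false, novel and emotive content", 0.12)]:
    steps, reach = steps_to_reach(p)
    print(f"{label}: ~{reach} users reached after {steps} sharing steps")
```

Even a modest difference in per-exposure engagement compounds step after step – which is exactly the property an engagement-maximizing algorithm selects for.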

Figure 2: How disinformation enables surveillance capitalism to function more effectively by encouraging higher levels of user engagement with online content (Blackbird.AI)


For the algorithm that favors content generating high engagement, disinformation is Big Tech’s moneymaker. The attention economy’s business model simply cannot include both honest reporting and unfettered profit. Blackbird.AI has identified several concerning issues that arise from this, from detrimental effects on the human ability to understand and relate to others, to ramifications for public safety and national security.

1. Algorithmic content recommendation purposefully deploys disinformation into digital environments optimized for it to succeed.

The very act of content recommendation functions as an act of legitimization, giving the appearance that its suggestions have been sanctioned by a platform as worthy of attention. Furthermore, manipulated messaging is purposely amplified towards users and demographics assessed to be predisposed to accept it at face value. For example, the Stanford Internet Observatory’s Renée DiResta explains how in 2016, Facebook recommended groups devoted to the debunked Pizzagate hoax to users identified as interested in (and thus susceptible to) conspiracy theories. That Pizzagate’s claims—including that Hillary Clinton and the Democratic Party controlled an international child sex trafficking ring—held no basis in fact was unimportant. Put simply, the most vulnerable in our societies are deliberately and disproportionately targeted by tech corporations for profit. The United States already maintains laws that prohibit analogous activities such as false advertising, aggressive marketing tactics and exploitation – why should the targeted dissemination of false information online not be subject to the same legal restrictions?

2. Algorithmic content recommendation may function in unexpected ways, making the spread of disinformation difficult to comprehend and predict.

For example, a recent media report reveals how the far-right QAnon conspiracy theory has been rebranded as an aestheticized lifestyle trend within wellness and spiritualist communities – a seemingly unlikely home for a hoax built on sensationalist calls to arms against a global deep state of cannibalistic pedophiles and steeped in racist and anti-Semitic rhetoric. Likewise, social media users may share outlandish conspiracy theory content without believing it to be true, perhaps as a way of raising awareness of or disparaging its baseless claims. The algorithm sees only a piece of content that has provoked engagement and will continue to promote it to other lookalike users on that basis.
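A hypothetical sketch of this veracity-blind promotion: once an item provokes engagement, it is pushed to the users most similar to those who already engaged with it (“lookalikes”). The profiles and similarity measure below are invented for illustration; the essential point is that no truth label ever enters the computation.

```python
# Illustrative lookalike-promotion sketch. Profiles and IDs are invented;
# note that nothing here checks whether the promoted item is true.
import numpy as np

rng = np.random.default_rng(42)

N_USERS, DIM = 100, 6
users = rng.normal(size=(N_USERS, DIM))   # mined interest profiles
engaged_ids = [3, 17, 58]                 # users who already shared the item

def lookalikes(engaged_ids, k=10):
    """Return the k users closest (by cosine similarity) to the average
    profile of those who have already engaged with the item."""
    centroid = users[engaged_ids].mean(axis=0)
    sims = users @ centroid / (np.linalg.norm(users, axis=1)
                               * np.linalg.norm(centroid) + 1e-9)
    sims[engaged_ids] = -np.inf           # don't re-target existing engagers
    return np.argsort(sims)[::-1][:k]

# The item is promoted purely on the strength of the engagement it provoked.
print("item promoted next to users:", lookalikes(engaged_ids))
```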

A man holds a Pizzagate placard in Washington DC, March 25, 2017 (Shutterstock)


3. Disinformation can dangerously radicalize spaces on the Internet and the users that frequent them.

As established above, online echo chambers are easily aggravated into aggressive forms of socio-political polarization, based on repeated self and group validation. Given that disinformation often tends towards heightened novelty and emotive content, this only shifts the poles even further apart, in a shorter amount of time. The ability of the QAnon conspiracy to galvanize extremist elements through its exploitation of extant socio-political fissures is illustrative in this regard. Within the US, its calls for true patriots to defend their individual rights have seen growing numbers of QAnon supporters among the far-right groups that participated in acts of violent civil unrest against anti-racism protesters during the summer of 2020. As QAnon expands its international base, these tactics are being replicated elsewhere; pre-existing radical political fringe groups, conspiracy theorists and disaffected communities the world over have proved fertile recruiting grounds for the hoax in recent months.

4. Repeated exposure to the echo chamber of disinformation materially shifts the value we place on Truth.

Fundamentally, the descent into the echo chamber provides new epistemological ways for digital content consumers to mediate the external world. We build new worlds from our cell phones outwards, and in doing so new identities, communities and ideologies form around specific views of reality, each convinced of its own self-evident authenticity. Emotional validation begins to take precedence over causal inquiry and rational logic, producing truths that are answerable to no one, even when confronted with reasonable evidence to the contrary.

CNN interviewer speaks with a Trump supporter at a rally in Bemidji, MN, September 24, 2020 (CNN).

The two are discussing a falsified video of Democratic presidential nominee Joe Biden appearing to fall asleep in a television interview. The clip went viral earlier that month after being tweeted by White House social media director Dan Scavino and has since been widely debunked as a hoax.

Reporter: So an article there is saying that [the video] was faked, but it looked real, right?
Interviewee: I mean, I definitely wouldn’t doubt that it would happen.
Reporter: Even if it is fake, does it change your opinion of Biden?
Interviewee: God no. You gotta sift through it; I missed that one. But, it was a good laugh, it was a really good laugh. And like I said, I wouldn’t doubt it.

The above exchange between a CNN reporter and interviewee on the veracity of the manipulated Joe Biden video is illustrative in this regard – what we want to be true is sometimes more important than what actually is the case. ‘Truth’ is ceded as a self-serving relative concept, whereby our cognitive biases seek validation of the personal feelings we hold dear as proud manifestations of our agency. That this ‘sense of self’ is the same intangible notion that is exploited and traded for profit under surveillance capitalism is just another hidden irony.

Conclusion: Where do we go from here?

Typical advice on how to address the problems posed by surveillance capitalism often includes limiting our children’s screen time; fact-checking our sources; or following people we don’t agree with on Twitter. The most worrying thing? That anyone who takes this on board—or is reading this now—already has some interest in how digital technology and disinformation can negatively impact human lives, and is likely to have already adjusted their online habits. Accordingly, it is less probable that tech corporations will target them with fake news and incendiary content. This is reserved for vulnerable users least likely to recognize their own manipulation or the growing influence of the echo chamber on their psyche, inhabiting corners of the Internet the rest of us do not see. That is, at least, until machine learning—on its constant trajectory of improvement—hits upon the exact personalized formula for disinformation that resonates even with the most cautious of users.

Blackbird.AI thus posits that encouraging digital platform users to diagnose their own cognitive biases is a flawed solution. Conceptualizing the fight against disinformation as only a binary choice between True and False is similarly limited; the comfort of the digital echo chamber reconstructs Truth as a relative concept. Indeed, any solution that places the onus on the user to think their way out of online manipulation risks underestimating the issue at hand: despite our individual human faults, it is the mechanics of surveillance capitalism that have created the conditions for disinformation to thrive online.

Fundamentally, the spread of online disinformation is directly aggravated by unchecked surveillance capitalism. Put simply, there is no financial incentive for tech corporations to fact-check or remove disinformation from their platforms, or to cease content recommendation practices, given their proven ability to spur engagement among users. The real-life harm that can follow from promoting manipulated information pales in importance next to corporate gain. Blackbird.AI proposes that the cycle of ‘targeted content recommendation – engagement – data collection – profit’ must be broken in order to comprehensively displace disinformation from our digital ecosystems. This will only occur when technology corporations no longer amplify profit using disinformation as a tool for growth, and are effectively held to legal account for their mass exploitation of human futures and well-being.

Roberta Duffield, Disinformation Analyst
BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Powered by our AI-driven proprietary technology, including the Constellation narrative intelligence platform, RAV3N Risk LMM, Narrative Feed, and our RAV3N Narrative Intelligence and Research Team, Blackbird.AI provides a disruptive shift in how organizations can protect themselves from what the World Economic Forum called the #1 global risk in 2024.