By Blackbird.AI’s RAV3N Narrative Intelligence and Research Team

In 2023, an unprecedented surge in sophisticated information operations swept the internet, with geopolitical conflicts and cultural events becoming fertile ground for narrative manipulation and attacks. These attacks persisted across the information ecosystem, spanning the Israel-Hamas conflict, the Russia-Ukraine war, the evolving dynamics of state-backed information campaigns against Western nations, and a notable rise in the use of generative AI to create deepfakes and spread disinformation.

The RAV3N team uncovered manipulated narratives across major global events, from geopolitical conflicts in Israel, Russia, and Ukraine to targeted disinformation campaigns in the West by CCP actors. Meanwhile, climate disinformation campaigns sought to undermine scientific consensus, and the collapse of Silicon Valley Bank exemplified the risks of narrative attacks in the financial sector. These developments underscore the growing complexity of digital information warfare and the urgent need for robust strategies to combat narrative attacks and protect the integrity of the public information environment.

The escalating conflict between Israel and Hamas is ripe for exploitation by threat actors seeking to sow and manipulate adverse narratives online. The limited nature of on-the-ground reporting in active conflict zones has created an informational dearth: an ideal environment for narrative attacks and digitally borne harm.

A common tactic amid the conflict is leveraging generative AI to fabricate misleading images. The RAV3N team found that this technique has rapidly proliferated across popular social media sites and deep web forums. It is particularly effective because detection methods continue to lag behind generative AI technology, and the lack of official or open-source information emerging from the conflict obscures truth from fabrication.

Many images and headlines used to announce developing situations since the initial October 7 attacks have recycled visuals and text from old reporting, passing them off as current.

The above network graph shows interactions between general users in yellow, pro-Russian users in blue, right-wing users in pink, left-wing users in orange, and Palestinian rights advocates in green. It is taken from a highly manipulative narrative pushed by a concentrated cluster of pro-Russian influencers, who diffused the narrative into several different communities until it was eventually co-opted by and settled among right-wing users.

Influence operations emerging from the Russia-Ukraine war have largely settled into a relatively small number of narratives able to withstand shifts in the conflict and in public perception of either warring party. Since Russia's most recent invasion of Ukraine, the rationalization offered by Putin, that Ukraine and Russia are one people and that Russia has a responsibility to 'denazify' its neighbor to the southwest, has doggedly persisted online.

This network graph, generated with Blackbird.AI's Narrative & Risk Intelligence Platform, visualizes a geopolitical conflict unfolding online.

As ever, pro-Russian narratives continued to seep into a wide variety of online communities, connected to the Russian cause by careful curation and subsequent exploitation of common ideological threads, such as isolationism, populism, anti-imperialism, anti-fascism, and traditionalism.

Among the most manipulated online events of the year in the war was Prigozhin's coup attempt and subsequent death. The RAV3N team detected a variety of narratives that emerged in pro-Russian communities, eventually diffusing outward into both ends of the political spectrum and co-opting conspiracy-minded users into shifting blame away from Russia. This strategy, combined with the Kremlin's radio silence, points the finger elsewhere: US Space Force lasers, Prigozhin's pilot, a cohort of Western nations, or even simple bad luck.

Since the onset of the Ukrainian counteroffensive earlier this year, pro-Russian accounts on popular social media networks, message boards, and forums have pushed hard the claim that Ukraine is losing the war and faces an imminent defeat that will reshape the region and the global power structure.

The most common method of inauthentic propagation noted by RAV3N analysts was targeted bot-like amplification, whereby pro-Russian influencers received a deluge of engagements from bots. This technique crowds out authentic conversation in favor of manipulative, often harmful posts.
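One way to make that "deluge of engagements" concrete is a simple ratio-based heuristic. The sketch below is illustrative only, not Blackbird.AI's actual detection method: the field names and thresholds are assumptions. It computes the share of engagements on a post that arrive within minutes of publication from young, low-follower accounts.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    account_age_days: int    # age of the engaging account
    follower_count: int      # followers of the engaging account
    seconds_after_post: int  # how soon after posting the engagement arrived

def bot_amplification_score(engagements, age_cutoff=30,
                            follower_cutoff=10, burst_window=300):
    """Fraction of engagements that look bot-like: young, low-follower
    accounts piling on within minutes of a post going live."""
    if not engagements:
        return 0.0
    suspicious = [
        e for e in engagements
        if e.account_age_days < age_cutoff
        and e.follower_count < follower_cutoff
        and e.seconds_after_post < burst_window
    ]
    return len(suspicious) / len(engagements)
```

A post whose score sits well above the platform baseline would be a candidate for bot-like amplification; real systems would combine many such signals rather than rely on one ratio.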

As in the Israel-Hamas conflict, generative AI was commonly leveraged to fabricate misleading images; the RAV3N team noted this everywhere, from popular clearnet social media sites to deep web forums.

The above network graph shows user interactions around the Wagner Group's influence in Africa. General users are shown in yellow, pro-Ukraine users in blue, right-wing users in green, pro-Russian users in orange, and left-wing users in pink. The graph reflects that pro-Russian and pro-Ukrainian sources are central in most conversations. Still, leveraging right-wing communities has proven an especially effective way for pro-Russian activists to dominate conversation and engagement online.
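Analyses like the one above rest on a simple structure: an interaction graph whose nodes carry community labels. As a hedged illustration (the user names and labels below are hypothetical, not from the report), counting edges between community pairs is one basic way to quantify how heavily one cohort leans on another:

```python
from collections import Counter

def cross_community_edges(edges, community):
    """Count interactions between each pair of communities in an
    undirected user-interaction graph.

    edges: iterable of (user_a, user_b) interaction pairs
    community: dict mapping each user to a community label
    """
    counts = Counter()
    for u, v in edges:
        # Sort labels so (A, B) and (B, A) collapse into one key.
        pair = tuple(sorted((community[u], community[v])))
        counts[pair] += 1
    return counts

# Hypothetical toy data: one pro-Russian account interacting with two
# right-wing accounts that also interact with each other.
interactions = [("kremlin_fan", "patriot1"),
                ("kremlin_fan", "patriot2"),
                ("patriot1", "patriot2")]
labels = {"kremlin_fan": "pro-Russian",
          "patriot1": "right-wing",
          "patriot2": "right-wing"}
mix = cross_community_edges(interactions, labels)
```

A disproportionate count on the (pro-Russian, right-wing) pair relative to other pairs is the kind of pattern the graph visualizes.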

The alignment of CCP and Russian information operations targeting the West has increased, with pro-CCP accounts frequently amplifying pro-Russian messaging that seeks to portray Western entities negatively. However, large-scale narrative attacks perpetrated by CCP state supporters and designed to demonize or discredit the West have had limited impact overall, as their inauthentic manipulation tactics often rely on low-quality automation that lacks the hallmarks of organic activity.

In October, the Canadian government accused a network of CCP state-affiliated threat actors of running a disinformation campaign targeting its political leaders simultaneously across social media platforms. From August to September, this influence operation aimed to undermine the political and ethical reputations of Prime Minister Justin Trudeau, Opposition Leader Pierre Poilievre, and numerous other politicians by circulating unfavorable allegations about them in English and French. Using Blackbird.AI's Constellation Narrative Intelligence Platform, the RAV3N team analyzed all negative conversations targeting Canadian politicians during this specific timeframe and identified a network of low-engagement users who caused two spikes in post volume and were associated with moderate to high levels of toxic content, unusual content propagation patterns, and negative sentiment.
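Volume spikes like the two noted above can be surfaced with a basic z-score test over daily post counts. This is a minimal sketch under assumed inputs, not the Constellation platform's actual detection logic; production systems would use more robust baselines:

```python
from statistics import mean, pstdev

def volume_spikes(daily_counts, z=2.0):
    """Return indices of days whose post volume exceeds the series
    mean by more than z population standard deviations."""
    mu = mean(daily_counts)
    sigma = pstdev(daily_counts)
    if sigma == 0:
        # Flat series: no day stands out.
        return []
    return [i for i, c in enumerate(daily_counts) if (c - mu) / sigma > z]
```

Days flagged this way would then be inspected for the accounts and narratives driving the surge.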

RAV3N analysts also identified and isolated negative messaging surrounding the invitation of a Nazi veteran to the Canadian Parliament on September 25. Constellation's signals indicated that this narrative benefited from both organic and inauthentic pro-Russian amplification, which completely eclipsed the alleged Chinese state-affiliated campaign targeting Canadian politicians in terms of post volume and engagement. This shows that pro-Russian malign interference benefits from a more powerful and visible ecosystem of influencers.

Interactions between users, hashtags, and URLs on the invitation of a Nazi veteran to the Canadian parliament and on the allegations against Canadian politicians, shown with blue and yellow nodes, respectively, on Blackbird.AI’s Constellation Platform.

2023 was marked by many high-engagement deepfakes circulating online, fueled by the rapid development of generative AI. Whereas traditional deepfakes were often limited in scope, technologically crude, difficult to access, and not widely available to the average audience, generative AI now offers its users instant, affordable, accessible, and realistic content.

Misinformation, disinformation, and malinformation have already disrupted political events using old-fashioned tools, undermining confidence in the last US elections, sowing division in the American electorate, and propagating conspiracy theories. Generative AI has proved even more harmful in the current political landscape, as demonstrated by numerous deepfake videos impersonating candidates in the 2024 presidential election and by a fabricated image of an explosion at the Pentagon that went viral online in May.

Similarly, scammers have increasingly used generative AI voice cloning to fabricate the voices of loved ones and ask their relatives and friends for money. AI voice cloning has also shown that it can infringe on individuals' right to privacy, particularly the protection of personal data, as cybercriminals often harvest source audio from public social media profiles and video-based platforms.

This technology has also been used to defraud businesses and banks, with scammers deploying voice cloning on phone and video calls to impersonate high-level executives and trick employees.

The key takeaway on generative AI-driven narrative attacks is that defenders must now contend with a far greater volume of plausible campaigns, underscoring the need for high-fidelity measurement of manipulation signals.

This year is the worst on record for billion-dollar climate disasters. Yet conversations emerged denying that this devastation is a consequence of climate change, spurring conspiracies and misleading narratives.

A common motif across the climate change narratives identified by the Blackbird.AI team is users posing alternative explanations to climate change.

On August 8, wildfires broke out across the Hawaiian island of Maui, devastating its land and infrastructure. Narratives swirled that the government purposely started the fires as part of a larger land-grab conspiracy to build a smart city, with some claiming lasers ignited the blazes. Similarly, Canada saw increasingly intense wildfires in June, sending smoke across the northeastern United States. Conspiracists claimed these fires were not an effect of climate change but were instead started by arsonists and laser beams from space.

COP 28 and various other climate-related events this year spurred narratives framing climate action as a means of government control. These narratives asserted that climate regulations are steps toward placing the world under lockdown rather than measures to mitigate environmental risk.

These narratives continue to surround climate events, putting organizations engaged in climate action at risk of narrative attacks.

Activists and influencers from diverse ideological backgrounds pushed misleading stories online amid Silicon Valley Bank's (SVB) collapse. The RAV3N team analyzed all related conversations on the Constellation platform, revealing distinct narratives, each with its own set of protagonists and conspiracies. One narrative called for a "bank run" in support of Donald Trump following SVB's downfall, while another focused on opposing a taxpayer bailout. Further, claims circulated that SVB executives, including its CEO, had sold personal stock ahead of the collapse, privileging their own financial gain over protecting customers. Other influencers propagated a narrative linking the failure of Signature Bank to ties with the Trump family. Lastly, some users stirred controversy by claiming SVB's bankruptcy resulted from substantial donations to "woke" causes, while conspiracy theorists connected the collapse to Jeffrey Epstein. In this case, Blackbird.AI's Constellation Narrative Intelligence Platform shed light on how misleading narratives can rapidly spread and pose a significant reputational threat to banking organizations, underscoring the need to monitor these discussions and adopt preemptive narrative strategies.

Interactions between users, hashtags, and URLs on a conspiratorial narrative, with dark red nodes indicating entities affiliated with the right-wing cohort, revealing a single user's influence on a specific community, as shown by Blackbird.AI's Constellation Platform.

As 2023 draws to a close, it is evident that digital narratives drive reality. The year's events, from the deepened complexities of global conflicts to the vibrant pulse of pop culture, have highlighted the critical need for vigilance and discernment in an era where the sophisticated machinery of generative AI and narrative manipulation often distorts reality. The intertwining of technology, politics, and social dynamics has reshaped our understanding of global events and underscored the urgency for innovative solutions to safeguard information integrity. As we step into the future, the lessons of 2023 will undoubtedly serve as a beacon, guiding efforts to navigate and demystify the increasingly intricate web of digital disinformation.

To learn more about how Blackbird.AI can help you in these situations, contact us here.

Blackbird.AI’s RAV3N Narrative Intelligence and Research Team
BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Powered by our AI-driven proprietary technology, including the Constellation narrative intelligence platform, RAV3N Risk LMM, Narrative Feed, and our RAV3N Narrative Intelligence and Research Team, Blackbird.AI provides a disruptive shift in how organizations can protect themselves from what the World Economic Forum called the #1 global risk in 2024.