Navigating the Warped Realities of Generative AI


Posted by Sarah Boutboul on April 18, 2023

Deep fakes – a combination of “deep learning” and “fake” – refer to manipulated media generated using artificial intelligence. The term first appeared in November 2017, when a Reddit user introduced r/deepfakes, a subreddit dedicated to creating and spreading AI-edited videos that inserted female celebrities’ faces into pornographic content. Users relied on an algorithm that combined the facial features of one person with the body of another, creating realistic-looking video. According to a 2019 study by Deeptrace Labs, “non-consensual deep fake pornography” accounted for 96% of the total deep fake content available online and was aimed specifically at damaging the reputation and credibility of women.

Although the subreddit was banned in February 2018, deep fake technology has continued to evolve and is increasingly used in political contexts. For example, deep fake videos of prominent activists and politicians appearing to support Hong Kong protesters circulated widely on social media in an attempt to discredit the movement, while footage of House Speaker Nancy Pelosi, slowed down to make her appear intoxicated, was retweeted by former President Donald Trump.

THE NEXT EVOLUTION OF DEEP FAKES

The danger posed by deep fakes has increased considerably with the development of Generative AI. This technology uses machine learning models to autonomously create new data, such as images, videos, and text. During training, these models consume large datasets and learn to recognize patterns in them; they then apply those patterns to produce new content that looks realistic but is entirely fabricated.
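
To illustrate just how accessible this has become, the sketch below invokes an off-the-shelf text-to-image diffusion pipeline. This is a minimal example assuming the open-source Hugging Face diffusers library and publicly released Stable Diffusion weights; these are not tools named in this article, and the model choice and prompt are purely illustrative.

```python
# Minimal sketch: generating a photorealistic image from one text prompt.
# Assumes: pip install diffusers transformers torch, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

# Load publicly released Stable Diffusion weights (illustrative model choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # drop torch_dtype and this line to run (slowly) on CPU

# A single sentence is enough to produce a convincing synthetic "photo".
prompt = "press photo of a politician shaking hands at a summit, 35mm, photorealistic"
image = pipe(prompt).images[0]
image.save("synthetic_photo.png")
```

A few lines of code, consumer hardware, and no specialized skill: that combination is what separates this generation of tools from the deep fake pipelines of 2017.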

“Traditional” deep fake imagery was often limited in scope, technically crude, difficult to produce, and not widely available to the average user. In comparison, Generative AI offers endless possibilities by generating instant, affordable, accessible, and realistic visual content. As the technology becomes increasingly accurate, convincing images can supply their own false context, a significant asset for malign actors. From this perspective, the tool is the next evolution of the traditional deep fake imagery threat.

Online communities have been a driving force in the spread of deep fakes since the technology first took hold in the digital ecosystem, serving as an entry point and collaborative space for interested users. These communities can now leverage Generative AI to create convincing fake images and quickly disseminate them on social media platforms, potentially causing significant harm to individuals and organizations.

As the technology continues to evolve, the number of individuals using it to create malicious content will most likely increase. Generative AI imagery is not yet subject to strict regulatory guidelines, such as a label explicitly highlighting its synthetic nature, and users are not always aware of the latest developments in AI capabilities, making them easier to deceive. ChatGPT can also offer significant help in writing prompts, generating text that guides Generative AI toward accurate and realistic imagery that is more sophisticated and harder to detect. In short, Generative AI’s mainstream accessibility amplifies the once theoretical deep fake threat, enabling rapid, low-cost, easy-to-use content creation without traditional attribution trails – a significant challenge to digital security and trust.

NO MORE UNCANNY VALLEY

The “uncanny valley” describes the discomfort people feel when confronted with artificial representations, such as robots or computer-generated images, that closely resemble humans but have slight imperfections. This eerie sensation occurs when the representation is almost human, but not quite. The biggest danger posed by Midjourney v5, one of the most prominent AI image generation models, is the difficulty of accurately distinguishing Generative AI-created content from human-made content without proper verification. According to a Syzygy Group survey of public perception of Generative AI in Germany, 94% of Germans believe that AI-generated images are so sophisticated that it is difficult to distinguish them from human-made content. Similarly, only 8% of Germans were able to identify a photo of a real person among AI-generated images of fake individuals. This concerning trend shows the influence of Generative AI on our perception of available information, as users may no longer be able to discern authentic content from manipulated content.

Efforts are underway to create parallel technology that can reliably recognize Generative AI content, but such initiatives lag behind the speed of the tool’s development. Writer, for example, offers a free service that estimates the percentage of human-generated content in a text. Image forensics can analyze an image’s metadata for signs of manipulation, and machine learning algorithms trained on large Generative AI datasets can be used to detect newly generated imagery. However, no technology is currently able to reliably recognize all instances of Generative AI content.
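
As a concrete illustration of the metadata-forensics approach mentioned above, the sketch below inspects an image file for camera EXIF tags and embedded generator text. It is a minimal, heavily caveated example assuming the Pillow library: absent camera metadata is a weak signal at best, since metadata is easily stripped or forged, and some generators embed nothing at all.

```python
# Minimal sketch: inspecting image metadata for weak signals of AI generation.
# Assumes: pip install Pillow. A missing camera tag proves nothing on its own;
# metadata can be stripped, edited, or absent for entirely innocent reasons.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image(path: str) -> None:
    img = Image.open(path)

    # 1) Camera EXIF: real photos often (not always) carry Make/Model tags.
    exif = img.getexif()
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    if not named:
        print("No EXIF data found (weak signal: stripped, or not a camera photo).")
    else:
        for key in ("Make", "Model", "Software", "DateTime"):
            if key in named:
                print(f"{key}: {named[key]}")

    # 2) Text chunks: some generation tools write their prompt/settings here.
    for key, value in img.info.items():
        if isinstance(value, str) and value.strip():
            print(f"Embedded text chunk {key!r}: {value[:120]}")

inspect_image("suspect_image.png")
```

Checks like these are cheap triage, not proof; in practice they must be combined with classifier-based detection and provenance tracking.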

Difficulties in differentiating authentic images from Generative AI-generated ones could pose a significant challenge for the upcoming 2024 presidential elections, as the technology is likely to be deployed as a political tool to spread digital disinformation. Deep fake imagery that is difficult to debunk could be exploited to drive a narrative and deceive voters, while simultaneously creating an opportunity for conspiracy theories to take root. Malicious actors might allege that credible media content is a deep fake – now that the line between genuine and fabricated is blurred – which could erode trust in all types of media, already at a historically low level. In a similar vein, AI-generated content could be used to deceive journalists, leading to the spread of fake news and increased reputational harm. In addition, deep fakes created for satirical purposes may be exploited to mislead users. For example, a Generative AI deep fake video created for entertainment could be taken out of context and disseminated as real information, leading to confusion, misinterpretation, and increased polarization in society.

FROM POLARIZING POLITICS TO ALTERNATIVE HISTORY AND FABRICATED NATURAL DISASTERS

Online factions compete to concoct the most outlandish and high-risk scenarios using advanced technologies, pushing the boundaries of AI-generated content and reshaping our perception of reality. Examples of fake Generative AI images created for a variety of purposes abound on platforms such as Reddit and its online communities.

Some of the most viral fake Generative AI images depict former US President Donald Trump being arrested, Melania Trump and Stormy Daniels laughing together, and Trump playing guitar in prison, all in the context of his much-anticipated arraignment.

A Midjourney image of former US President Donald Trump playing guitar in prison
(By u/Pashini90 on r/midjourney).

Other fake images show US President Joe Biden’s administration partying at the White House, Trump on vacation in China, or Biden and Russian President Vladimir Putin shaking hands.

A Midjourney image of US President Joe Biden and Russian President Vladimir Putin shaking hands
(By u/wtfmanuuu on r/midjourney)

The popularity of Generative AI imagery has spread worldwide, as evidenced by the viral photorealistic deep fake of French President Emmanuel Macron’s arrest amid ongoing protests in the country. While these fake images are often created for satirical purposes, they can fuel public panic and create confusion or distrust.

A Midjourney image of French President Emmanuel Macron getting arrested
(By @HologramBlues on Twitter)

The creation of fake Generative AI imagery is not limited to the political sphere. Other examples include content depicting UFOs and aliens in a variety of contexts: on Mars, meeting the Pope, and even US President Joe Biden appearing as an alien himself. The latter, which combines manipulated video and voice technologies, could be used to spread false information about the President and undermine public confidence in the government.

An AI-generated image of Pope Francis wearing a puffy jacket
(by Pablo Xavier)

A Midjourney image of the Pope looking at a UFO flying over the Vatican
(By u/charismactivist on r/midjourney)

In addition to Generative AI deep fake images of political figures, reimagined historical events, and UFOs, imagery of fake earthquakes and tsunamis has garnered significant attention. These hyper-realistic deep fakes focused on natural hazards could breed distrust and skepticism about the veracity of natural disasters, hampering efforts to respond to them as they occur.

Midjourney images depicting the historic Moon landing as staged
(By u/FineWithIX on r/midjourney)

A Midjourney image of destroyed infrastructure following an earthquake on the US and Canadian Pacific coast
(By u/Arctic_Chilean on r/midjourney)

A Midjourney image of disaster relief personnel and destroyed infrastructure following an earthquake on the US and Canadian Pacific coast
(By u/Arctic_Chilean on r/midjourney)

To showcase how easy it is to create deep fake imagery by combining readily available, interconnected AI technologies, Blackbird created its own narrative around a natural disaster that never happened. Our team first asked ChatGPT to provide a detailed summary of an oil spill scenario, then fed the result to the popular Generative AI image creator Midjourney. The prompt, “Devastated news anchor reporting live from Hawaii oil spill, massive slick of black oil spread across the sea, dead animals, real life photography” generated four incredibly detailed and realistic images.

Midjourney images displaying journalists reporting on an oil spill in Hawaii
(Generated by the Blackbird team)
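
The scriptable half of this workflow looks roughly like the sketch below, which asks a chat model to turn a one-line scenario into a detailed image prompt. It is a minimal example assuming the openai Python package (v1+ client interface) and an API key in the environment; the model name is illustrative, and the resulting prompt would still be pasted manually into Midjourney, which offers no public API.

```python
# Minimal sketch: using a chat model to draft a photorealistic image prompt.
# Assumes: pip install openai, with OPENAI_API_KEY set in the environment.
# Model name is illustrative; the output is pasted into an image tool by hand.
from openai import OpenAI

client = OpenAI()

scenario = "a fictional oil spill off the coast of Hawaii"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You write short, vivid prompts for a text-to-image model."},
        {"role": "user",
         "content": f"Write a one-sentence photorealistic image prompt about {scenario}."},
    ],
)

image_prompt = response.choices[0].message.content
print(image_prompt)  # a detailed prompt ready for an image generator
```

The entire chain, from blank page to four publishable fake news photos, takes minutes and requires no expertise in either journalism or image editing.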

The ease with which both the prompts and images can be created reinforces concerns about the misuse of the technology. In our example, the generated content could be used to disseminate false information about an oil spill in Hawaii, which could in turn create outrage, boycotts, and more. And even once such images are debunked, they can make the public less likely to trust reports of similar events in the future. We have entered an information environment in which Generative AI enables threat actors to effortlessly create countless variations of events, turning violence, war crimes, and disasters into media-rich, distorted narratives with nothing more than a well-crafted prompt.

LEGAL AND SECURITY IMPLICATIONS

The proliferation of Generative AI mechanisms raises a wide variety of legal and security concerns, from intellectual property to legal liability. This section provides brief overviews of the implications of Generative AI concerning defamation and gendered harm, copyright law, legal evidence and admissibility, criminally obscene imagery, corporate brand risk, and extremist propaganda.

Defamation & Libel
Generative AI may have significant implications for defamation law, raising urgent questions about who can or should be held liable for the creation and/or dissemination of defamatory content. Defamation occurs when a false statement of fact is communicated to a third party, causing harm to the reputation of the person or entity being defamed. With the rise of Generative AI, it is becoming increasingly easy to create and disseminate false and defamatory content like deep fakes and synthetic text. This creates significant risks for individuals and organizations seeking to protect their reputations.

For example, Brian Hood, the mayor of Hepburn Shire, Australia, is considering a defamation lawsuit against OpenAI and its generative AI chatbot, ChatGPT, after the chatbot allegedly shared false claims about his criminal record. If the case goes to court, ChatGPT could become the first Generative AI product targeted by a defamation suit. Hood’s lawyers sent OpenAI a letter of concern, giving the company 28 days to fix the errors or face a defamation suit. The lawsuit could extend defamation law into a new area of artificial intelligence and publication.

Copyright Infringement

Screenshots from “Zarya of the Dawn,” a comic written by a human author but illustrated by Midjourney
(Comic by Kris Kashtanova)

The use of Generative AI to create images and videos raises significant copyright implications. Copyright law grants the owner of a copyrighted work exclusive rights to control the reproduction, distribution, and display of the work. When Generative AI is used to create an image or video, questions arise as to who holds the copyright in the resulting work. In some cases, the copyright might belong to the person who created the Generative AI algorithm, while in others, it may belong to the person who provided the input data to the algorithm.

Another copyright issue arises when Generative AI is used to create works that are substantially similar to existing works, raising the question of whether the new work infringes the existing one. Courts have held that a work can infringe a copyrighted work if it is substantially similar to the original and the defendant had access to it. It remains an open question whether a Generative AI algorithm trained on existing works can create new works that are substantially similar to them and, if so, whether such works would be infringing.

In February 2023, the US Copyright Office ruled that illustrations created by the diffusion model Midjourney for a comic book (pictured above) were not protected by copyright law. While the author retained rights to the human-written text, the illustrations, all of which were AI-generated, received no intellectual property protection. The decision will strongly influence how artists and companies approach the creative use of AI-generated images going forward.

Legal Evidence & Admissibility Implications
Attorneys Matthew Ferraro and Brent Gurney suggest that defendants in certain cases are increasingly likely to invoke deep fakes or other forms of AI-manipulated media to cast doubt on the reliability of video evidence in trials. As deep fakes become more realistic and harder to detect, the risk increases that falsified evidence finds its way into the legal record, producing unjust results. The existence of convincing deep fakes also makes it more likely that a defendant will challenge the integrity of evidence, even without reasonable suspicion of inauthenticity. This phenomenon, known as the “liar’s dividend,” poses a serious threat to the judicial process.

As the visual products of Generative AI become increasingly indistinguishable from real images, the legal system will have to rely on “expert” witnesses to make judgments about the authenticity of photo- and videographic evidence in the absence of robust detection technology. According to University of Waterloo professor Maura Grossman, “neither a judge nor a jury will be in any position to believe their eyes and ears anymore when they look at evidence.” Grossman also notes the potential for inequitable treatment: “Only defendants who can afford to pay for expert analysis of the evidence will be able to get a fair trial when trying to refute deep fakes.”

Corporate Brand Risks

An AI-generated image of a nonexistent Coca-Cola product, imagining how the product might be designed if the Coca-Cola Company had been started in Japan
(By u/mrgalexey in r/midjourney)

As brand reputation and public image are paramount to success, deep fakes and Generative AI pose significant risks to companies. These technologies allow for the creation of highly convincing videos, images, and audio recordings that can be used to manipulate or deceive consumers, opening up a range of possibilities for bad actors to damage a brand’s reputation by disseminating false information or creating content that portrays a company in a negative light.

One of the main concerns is that deep fakes and Generative AI can be used to create fake endorsements or testimonials that deceive customers and damage a brand’s reputation. These technologies can also be used to create fake news or malicious content that harms a company’s image. As such, companies need to be vigilant in monitoring their brand reputation and take swift action to address any issues that arise. This may involve working with legal experts, developing crisis communication plans, and investing in technologies that can detect and mitigate the impact of deep fakes and Generative AI on their brand.

One risk is the creation of deep fake images that falsely associate a brand with a product or service it does not actually endorse. For example, a deep fake image could show a celebrity using a product that the brand does not manufacture or endorse. Such images can cause significant harm to a brand’s reputation by associating it with products or services that are of poor quality or manufactured unethically.

An AI-generated mock-up of a nonexistent Bentley pick-up truck
(From imgur)

Brands are also at risk of losing control of their product image. One popular thread in Reddit’s Midjourney forum showcases one user’s attempt to create internationally inspired Coca-Cola product designs. Another user created a gallery of McDonald’s burgers surrounded by dirty or repulsive backgrounds. Yet another created a gallery of mockups of hypothetical vehicles from a variety of automobile companies.

Extremist Propaganda & Imagery
As the use of Generative AI continues to expand, experts are increasingly warning of the risks posed by its adoption by extremist groups. With the ability to create highly realistic images, videos, and audio recordings, Generative AI has the potential to become a powerful tool for creating and distributing propaganda and manipulating narratives.

Extremist ideologies require new membership to survive. Generative AI allows extremist propagandists to produce a wider range of materials that are more sophisticated, persuasive, and targeted than ever before, at scale. By analyzing social media data, Generative AI algorithms will be capable of creating visual narratives tailored to the interests and preferences of the vulnerable users extremist groups seek to recruit. Generated materials may also serve to solidify or rally an existing base. On Reddit, one user in the r/midjourney forum generated a painting-like image of Hillary Clinton as a vampire holding a small child, a play on the QAnon conspiracy theory that claims the Clintons and other elite families drink the blood of children. While the user is likely not a member of an extremist group, such imagery has spurred violent incidents in the past. The user joked, “Making a start on my QAnon comic book.”

A painting-like AI-generated image of Hillary Clinton as a vampire holding a crying child, referencing popular QAnon conspiracy theories
(By u/asjarra in r/midjourney)

Extremists are also likely to exploit Generative AI to create deep fake videos designed to damage social and political opponents.

MITIGATING WARPED REALITY RISKS

We are witnessing advances in Generative AI and AGI development at breakneck speed, with little thought for the implications for society. Social media platforms have been around for over a decade, yet have failed to moderate even human-speed discourse and its related harms. Now, with readily available technology that can generate unlimited narratives and media, we risk warping reality in previously unimaginable ways.

From a cybersecurity perspective, Generative AI poses unprecedented societal and national security risks. Threat actors now have an incredibly powerful tool in their toolkit, enabling everything from disinformation generation to malicious code creation. Furthermore, the popularity of these tools across all modes of work means that enterprise strategy, intellectual property, and other forms of confidential data are being fed into large language models with unknown or ever-changing privacy policies.

If we continue to overlook the influence of AI programming on decision-making processes, or the risk of centralized technologies compromising personal data privacy, the unchecked use of Generative AI tools could dramatically alter our perception of reality. It is crucial that we evaluate multiple scenarios and weigh the costs and benefits of AI disruption to avoid a distorted reality.

Most notably, the difficulty of distinguishing between content created by Generative AI and human-made products, as well as the ability of deep fakes to reinforce erroneous belief systems, are major concerns. These issues have the potential to compromise the integrity of elections, fuel false narratives, and deepen the lack of trust in the media, public discourse, and ultimately democracy.

Threat mitigation measures include, but are not limited to, careful vigilance in sourcing and identifying deep fake images, strong cross-industry collaboration, and investment in research to better monitor and understand these new tools. While current technology may not be able to systematically detect Generative AI images, companies can empower themselves with technologies that monitor and analyze the spread of specific campaigns using AI-generated content to mitigate the impact of disinformation. By tracking how far and fast narratives containing manipulated images spread, stakeholders can effectively tackle this growing class of threats and stay ahead of fast-evolving tradecraft designed to shift human perception.
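
One simple building block for that kind of tracking is perceptual hashing, which lets an analyst check whether two images circulating on different platforms are near-duplicates even after resizing or recompression. Below is a minimal sketch assuming the open-source imagehash and Pillow packages; it illustrates the general technique, not any particular vendor’s platform.

```python
# Minimal sketch: detecting near-duplicate images with perceptual hashing.
# Assumes: pip install Pillow imagehash. Illustrates the general technique only.
from PIL import Image
import imagehash

def near_duplicates(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Return True if two images are perceptually similar.

    pHash is robust to resizing and recompression, so the same fake image
    reposted across platforms usually hashes within a few bits of itself.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # Hamming distance between hashes

# Example: compare an original fake with a re-shared, recompressed copy.
print(near_duplicates("viral_fake.png", "reshared_copy.jpg"))
```

Hashing every image attached to a spreading narrative makes it possible to map where and how quickly a single manipulated asset propagates, which is exactly the visibility defenders need.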

About Blackbird.AI

BLACKBIRD.AI protects organizations from narrative attacks created by misinformation and disinformation that cause financial and reputational harm. Powered by our AI-driven proprietary technology, including the Constellation narrative intelligence platform, RAV3N Risk LMM, Narrative Feed, and our RAV3N Narrative Intelligence and Research Team, Blackbird.AI provides a disruptive shift in how organizations can protect themselves from what the World Economic Forum called the #1 global risk in 2024.

Need help protecting your organization?

Book a demo today to learn more about Blackbird.AI.