Behind the TikTok Hearings: Geopolitics, Narrative Manipulation and the Power of Social Media

TikTok has taken the world by storm, quickly becoming one of the most popular social media platforms globally. But with its rapid rise comes growing concerns in the United States about potential risks associated with the platform. So, why are U.S. lawmakers worried about TikTok, and are these fears justified?

TikTok's Unique Problem: Ownership by a Geopolitical Adversary

While all social media platforms wrestle with issues like misinformation, child safety, and data privacy, TikTok's unique challenge lies in its ownership. The platform is owned by ByteDance, which is headquartered in China and has been reported to operate under the influence of the Chinese Communist Party (CCP). This connection has raised concerns about potential Chinese government influence on a platform widely used by Americans.

A primary concern is that TikTok's parent company, ByteDance, could potentially share users' data with the Chinese government. The Chinese National Intelligence Law of 2017 requires Chinese companies to cooperate with the government's intelligence efforts, which could include sharing user data. This possibility has raised alarm bells in the U.S., as data from millions of American users could be at risk.

The Committee on Foreign Investment in the United States (CFIUS) has even investigated ByteDance's acquisition of Musical.ly, the app that later became TikTok, due to these concerns. Although no action was taken against ByteDance at the time, the investigation underscored the U.S. government's wariness of TikTok's Chinese ownership.

In response to these concerns, TikTok has emphasized that its U.S. user data is stored in the United States, with a backup in Singapore. Additionally, the company has stated that its data centers are not subject to Chinese law. However, some experts argue that the Chinese government could still find ways to access this data, and the potential risk to national security remains a significant concern.

The Potential for CCP-Controlled Narratives on TikTok

If the CCP managed to exert control over TikTok, it could shape narratives on the platform to promote its interests and suppress dissent, particularly during critical geopolitical moments. With over 150 million users in the U.S., TikTok is a powerful tool for shaping public opinion. As the Russian invasion of Ukraine has made very clear, the nature of geopolitics and war has changed dramatically as a result of social media and online narratives.

TikTok has already been accused of suppressing content critical of the CCP and of promoting pro-China narratives. One notable example:

  • In 2019, The Guardian reported that TikTok's internal moderation guidelines directed moderators to censor content that might harm "national honor" or criticize the Chinese government's policies. While TikTok later claimed these guidelines were outdated, the incident raised concerns about potential CCP influence on the platform.

Incidents like this suggest that the CCP could use its influence over TikTok to promote narratives that align with its interests and to suppress dissenting voices on the platform.

By shaping public opinion, the Chinese government could undermine trust in American institutions, create social division, or even promote policies that benefit the CCP's strategic interests.

Are concerns about TikTok well founded? What can be done about it?

While concerns about TikTok's ties to the CCP are legitimate, they should not overshadow the fact that all social media platforms are vulnerable to misinformation and influence campaigns. The challenge is to develop robust strategies to mitigate these risks, regardless of a platform's ownership, while balancing freedom of speech and user safety.

This includes promoting transparent algorithms, implementing third-party audits, and establishing stronger content moderation guidelines. While content moderation has been hotly debated of late, the runaway incentives of attention-driven revenue have long pushed platforms to favor engagement and reach over user safety. If online societal harms are truly a priority, governments should strengthen legislation to prevent foreign influence and invest in media literacy programs that help users better understand and evaluate the content they encounter, raising baseline public literacy on data governance.

Finally, collaboration is key. Social media platforms, governments, civil society organizations, and researchers must work together to share best practices, enact stronger data privacy protections, and develop innovative solutions to address narrative manipulation and misinformation. By adopting these strategies, stakeholders can create a more transparent and balanced social media environment, protecting the integrity and diversity of information in the digital age.

Balancing the Complexities of Online Discourse

While these recommendations seem sound, the likelihood that such measures will be agreed upon and implemented is shrinking in the U.S. and around the world. In fact, we have been moving in the opposite direction: platforms have begun to roll back access for research communities, scale back moderation of misinformation, or abandon moderation altogether in the name of freedom of expression. The very notion of banning a popular platform in the U.S. would have seemed unthinkable a few short years ago, with organizations like the ACLU strongly arguing that a TikTok ban would violate the First Amendment.

But how can the First Amendment square with a platform that itself manipulates the content and flow of public conversation? Balancing First Amendment protections against content moderation is an ongoing challenge, particularly when the platforms themselves can steer public discourse through their recommendation feeds. Algorithmic transparency is crucial in this context, as it allows for a more open and honest debate about the role of free speech and content moderation in the digital era. The counterpoint is that transparency can also enable third parties to game the algorithm.

Ultimately, platforms that intentionally disrupt the dynamics of online conversations may see short-term gains in profit and influence, but they are unlikely to sustain long-term growth with a diverse user base. If most major social media platforms pursue a path of profit-driven narrative manipulation or reactive moderation, we will continue to witness the fragmentation of the online landscape into smaller, topic-focused communities, rather than the realization of a truly inclusive and open 'digital town square.'

While the future of TikTok in the U.S. remains uncertain, the conversation surrounding the platform has highlighted the broader challenges facing the digital information ecosystem, with unlikely allies both for and against legislation for a ban. As we continue to grapple with these issues, it's essential to remain vigilant and committed to finding solutions that preserve the integrity of our online spaces, regardless of the specific platforms involved.