Machine-learning algorithms are controlling what we see and how we think, and as they get smarter, we are spiraling into a dark vortex. People are forming opinions based on false representations, spewed out by automated sources that know what they want to hear and construct a false reality to appease them.

The term ‘fake news’ has been weaponized to mean any news source that doesn’t comply with a particular ideology. Individuals are now creating hyper-biased propaganda websites, and these are being linked to other fake news sites, with YouTube at the center of the network. Many of these sites funnel visitors toward a new breed of website that lines up with a belief system and shades into outright propaganda.

A 2017 Times article reported that Isis had used YouTube to entice new recruits:
“Islamic State has flooded YouTube with hundreds of violent recruitment videos since the terrorist attack in London last week in an apparent attempt to capitalise on the tragedy, The Times can reveal.

Google, the owner of YouTube, has failed to block the films, despite dozens being posted under obvious usernames such as “Islamic Caliphate” or “IS Agent”. Many are produced by the media wing of Isis and show, in high definition, beheadings and other extreme violence, including by children.”

It should be noted that YouTube later began redirecting viewers of some of these videos to anti-terrorist content.

The internet has opened the door to information hell, and thousands of people around the world are jumping on the bandwagon to manipulate your mind. You might consider yourself a critical thinker and disregard the outrageous and bizarre, but what about people who have mental disorders? They delve into a hideous world of misinformation that makes any sci-fi flick look tame. To them, it is all real.

What is scary is that bots are now added to the mixture, amplifying these messages into surprisingly coordinated campaigns of fake news and other propaganda, each aimed at a target audience. Social media platforms, Twitter and Facebook in particular, host thousands of accounts feeding these dangerously polluted algorithms. The infrastructure sees hundreds of such campaigns daily; they tap into our emotions, and when you visit these sites, four or five organizations are probably tracking you.
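
To make “surprisingly coordinated” concrete: one simple heuristic analysts use to spot bot amplification is to flag clusters of accounts that post near-identical text within a short burst. The sketch below is a minimal, hypothetical illustration of that heuristic, not any platform’s actual detection system; all thresholds and field names are assumptions.

```python
# Minimal sketch of one bot-amplification heuristic: many accounts
# posting the same text within minutes of each other. The thresholds
# and the post format are hypothetical.

from collections import defaultdict

def flag_coordinated_posts(posts, window_seconds=300, min_accounts=20):
    """posts: iterable of (account_id, timestamp_seconds, text) tuples."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))

    suspicious = []
    for text, hits in by_text.items():
        hits.sort()                      # order by timestamp
        accounts = {a for _, a in hits}
        burst = hits[-1][0] - hits[0][0] <= window_seconds
        if len(accounts) >= min_accounts and burst:
            suspicious.append((text, sorted(accounts)))
    return suspicious
```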

The keepers of these systems have designed AI with smart algorithms, and they are simply turning it loose on society.

YouTube Attracts the Kids

YouTube was launched in 2005, but when it was purchased by the tech giant Google, it took on a whole new face. The engineers Google hired created an algorithm to increase profits by keeping users glued to videos, recommending additional videos before the current one was even done. The goal isn’t to offer relevant videos, just to keep you watching non-stop. The business model worked: YouTube now reports some two billion users every month, and a majority of its viewers are young.
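
To make the incentive concrete, here is a minimal sketch, in Python, of a recommender whose only objective is predicted watch time. Everything here (the fields, the scoring formula, the weights) is a simplified assumption; YouTube’s real system is vastly more complex, but the point survives the simplification: accuracy and well-being never enter the objective.

```python
# A toy recommender that ranks candidate videos purely by predicted
# watch time. Field names and the scoring formula are hypothetical.

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    avg_watch_seconds: float      # how long viewers typically stay
    similarity_to_history: float  # 0..1 match with the user's past views

def predicted_watch_time(video: Video) -> float:
    # Similarity matters only as a proxy for keeping the viewer watching.
    return video.avg_watch_seconds * video.similarity_to_history

def recommend(candidates: list[Video], k: int = 5) -> list[Video]:
    # Nothing asks whether a video is accurate, age-appropriate, or
    # healthy for the viewer -- only whether it will be watched.
    return sorted(candidates, key=predicted_watch_time, reverse=True)[:k]
```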

A 2018 Pew study found that 85% of teens in the United States use YouTube, making it the most popular online platform among the under-20s. That might sound harmless enough when they are watching the antics of puppies, but it takes on an entirely different taste when the topic is extremist or political content. YouTube’s design places people in filter bubbles of its own making and offers no way to escape. A kind of dopamine reinforcement attracts and keeps the attention of kids, and all the while, random accounts are making a profit. Additional elements designed to attract kids include alluring music that becomes emblazoned in the brain of anyone who listens to it. Appealing to multiple senses is one of the surest ways to keep viewers on a video, and kids play them over and over again.

Every generation has raised red flags over the younger generation’s behavior. We look back and laugh at the “dangers” of Elvis Presley swiveling his hips or the long hair of the Beatles. Today’s kids and adults are watching YouTube videos that include the kind of ultra-violence portrayed in the film A Clockwork Orange. YouTube offers a platform for anyone to film and upload, and the content runs the gamut from eating laundry pods and drinking urine all the way to suicide. For the youngest children, some of these videos generate outright fear and trauma.

An article in Wired UK, ‘Children’s YouTube is still churning out blood, suicide and cannibalism’, reports:

“WIRED found videos containing violence against child characters, age-inappropriate sexualisation, Paw Patrol characters attempting suicide and Peppa Pig being tricked into eating bacon. These were discovered by following recommendations in YouTube’s sidebar or simply allowing children’s videos to autoplay, starting with legitimate content.

‘Recommendations are designed to optimize watch time, there is no reason that it shows content that is actually good for kids. It might sometimes, but if it does it is coincidence,’ says former YouTube engineer Guillaume Chaslot, who founded AlgoTransparency, a project that aims to highlight and explain the impact of algorithms in determining what we see online. ‘Working at YouTube on recommendations, I felt I was the bad guy in Pinocchio: showing kids a colourful and fun world, but actually turning them into donkeys to maximise revenue.’”

AI Algorithms Attract and Encourage Radicalism

In a New York Times article, the sociologist Zeynep Tufekci describes watching YouTube videos of Donald Trump, after which the site recommended videos “that featured white supremacist rants, Holocaust denials and other disturbing content.” YouTube is directly responsible for creating the kind of deviant rabbit hole that some turn away from but others find themselves strangely drawn into.

The research group Data & Society released a scathing report:

“Alternative Influence: Broadcasting the Reactionary Right on YouTube, by researcher Rebecca Lewis, presents data from approximately 65 political influencers across 81 channels to identify the “Alternative Influence Network (AIN)”: an alternative media system that adopts the techniques of brand influencers to build audiences and “sell” them political ideology.

Alternative Influence offers insights into the connection between influence, amplification, monetization, and radicalization at a time when platform companies struggle to handle policies and standards for extremist influencers. The network of scholars, media pundits, and internet celebrities that Lewis identifies leverages YouTube to promote a range of political positions, from mainstream versions of libertarianism and conservatism, all the way to overt white nationalism.”

Three key quotes from Alternative Influence:

  • ‘Increasingly, understanding the circulation of extremist political content does not just involve fringe communities and anonymous actors. Instead, it requires us to scrutinize polished, well-lit microcelebrities and the captivating videos that are easily available on the pages of the internet’s most popular video platform.’
  • ‘By connecting to and interacting with one another through YouTube videos, influencers with mainstream audiences lend their credibility to openly white nationalist and other extremist content creators.’
  • ‘YouTube monetizes influence for everyone, regardless of how harmful their belief systems are. The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online – and in many cases, to generate advertising revenue – as long as it does not explicitly include slurs. YouTube also profits directly from features like Super Chat which often incentivizes ‘shocking’ content.’

Free Speech vs. Societal Harm

The landscape of freedom experienced on the net has a downside for those who suffer from mental illness. Serious mental health conditions cover a wide span of symptoms, and the temptation of YouTube can take sufferers down a spiraling vortex of no return.

The case that brought this front and center was that of a 26-year-old man who killed his brother because he heard voices telling him that his brother was a lizard. Buckey Wolfe had exhibited early signs of mental illness, but it was his experience with YouTube videos that drew him deeper into his sickness. The YouTube algorithm consistently offered Wolfe additional videos until he embraced far-right movements such as QAnon and the Proud Boys. His delusional fantasies eventually solidified: convinced that his brother was a lizard, he killed him. In Wolfe’s case, the YouTube algorithm was doing precisely what it was programmed to do, and the results were catastrophic.

The algorithm was designed as an AI tool for YouTube and never took human emotions, reactions, or mental conditions into consideration.

Using AI, the organizations involved in this massive brainwashing on YouTube scan the news and create false reports and, in many cases, videos that incite violence.

Parent company Google has been called to task over YouTube radicalization, and while it has stepped up its terms-of-service (TOS) policies, this is only enough to protect its own organization. If a video receives complaints, YouTube will review and remove it, but this relies solely on users to register a complaint. As a further step, YouTube has indicated that it will remove automated control and allow only humans to make the decisions about accountability and acceptability. This alternative is not an answer either, as it will involve people who are not qualified to make these kinds of decisions.
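
The flaw in a complaint-driven process is easy to see in miniature. In the toy model below (the threshold and data structures are hypothetical, not YouTube’s), a video that no one reports never reaches a reviewer at all, no matter how harmful it is.

```python
# Toy model of complaint-driven moderation: nothing is reviewed until
# enough users complain, so unreported videos are never examined.

REVIEW_THRESHOLD = 10             # hypothetical complaint count

complaints: dict[str, int] = {}   # video_id -> complaints received
review_queue: list[str] = []      # videos awaiting a human reviewer

def register_complaint(video_id: str) -> None:
    complaints[video_id] = complaints.get(video_id, 0) + 1
    # A video only reaches a human once enough viewers object to it;
    # a harmful but unreported video stays up indefinitely.
    if complaints[video_id] == REVIEW_THRESHOLD:
        review_queue.append(video_id)
```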

Worst-case scenarios of YouTube watching may involve people with mental health difficulties, but the problem extends far beyond that. The volume of AI-generated fake news on YouTube was so great that the fallout influenced the 2016 US presidential election and the Brexit vote, and gave a platform to false political voices.

In his article ‘FakeTube: AI-Generated News on YouTube’, Jonathan Albright writes:
“One of the results of my recent post-election “news ecosystem” effort was the presence of YouTube. Beyond YouTube’s role in hosting videos through embeds on political websites, after reading a piece on A.I. and YouTube by Guillaume Chaslot, I thought I’d look more into why it increasingly feels YouTube, like Google, is being harnessed as a powerful agenda booster for certain political voices.”

Tracking the latest version of fake news, called “A Tease”, Albright located 6,902 AI-generated YouTube videos that portrayed themselves as “news.”

“Curated, Republished, and AI-Narrated – Each “T” AI-generated video consists of a progressive “slideshow” of still images related to the title of the video, which originate from various websites, WordPress blogs, and content delivery networks across the internet. It’s notable that the last frame of these “Tease” videos seem to have a “copyright” disclaimer.”

“A computerized voice ‘reads’ out text from published news articles and other news-related content as the slideshow progresses. The narration is surprisingly accurate and coherent, except for the brief pauses during the transitions between the sources from which the video derives its ‘story.’”
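
The pipeline Albright describes is simple enough to sketch. The following is a hypothetical reconstruction, not the actual generators’ code: it pairs a computerized voice (here the gTTS library, an assumption) with a still-image slideshow (here moviepy, also an assumption), matching the slideshow-plus-narration structure the quotes describe.

```python
# Hypothetical reconstruction of the "slideshow + computerized
# narration" pipeline described above; gTTS and moviepy are assumed
# stand-ins for whatever toolchain the real generators use.

from gtts import gTTS
from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

def make_tease_video(article_text: str, image_paths: list[str],
                     out_path: str = "tease.mp4") -> None:
    # 1. A computerized voice "reads" the scraped article text.
    gTTS(article_text, lang="en").save("narration.mp3")
    narration = AudioFileClip("narration.mp3")

    # 2. Spread the still images evenly across the narration to form
    #    the progressive slideshow.
    per_image = narration.duration / len(image_paths)
    slides = [ImageClip(p).set_duration(per_image) for p in image_paths]

    # 3. Stitch slides and narration into a finished "news" video.
    video = concatenate_videoclips(slides, method="compose").set_audio(narration)
    video.write_videofile(out_path, fps=24)
```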

In other words, we are looking at the next evolution of AI-generated, so-called “news” videos, which have escalated the use of the YouTube algorithm to the point that, literally, no one is in charge of what any of us sees, or even knows whether it is the truth. Albright is continuing to tally the videos and so far estimates around 80,000.

The time has come to drop the curtain on AI-controlled mind influence. Just as YouTube designed an algorithm to drive profitability through videos and viewing, we need an intelligent AI that can cut through the lies, misinformation, and misrepresentation. As a society, we need to stop looking at technology as a “solution” to our problems and instead begin using it as a guide.
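
What might such a lie-cutting AI look like at its most basic? The sketch below is a deliberately naive illustration: a text classifier trained on a labeled corpus of known-fake and legitimate articles. It assumes that such a labeled dataset exists (which is the genuinely hard part) and is nowhere near a production fact-checker, but it shows the general shape of the tool the paragraph above calls for.

```python
# Toy misinformation classifier: TF-IDF features + logistic regression.
# Assumes a labeled corpus (1 = misinformation, 0 = legitimate), which
# in practice is the hard part. Illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_misinfo_filter(texts: list[str], labels: list[int]):
    # TF-IDF turns each article into word-frequency features; logistic
    # regression learns which patterns correlate with known fakes.
    model = make_pipeline(TfidfVectorizer(stop_words="english"),
                          LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

# Usage (hypothetical data): scores near 1.0 mean "looks like misinformation".
# model = train_misinfo_filter(train_texts, train_labels)
# score = model.predict_proba(["transcript of a suspect video"])[0][1]
```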