Is the “CDiO” the next hot position in the cybersecurity sector?

Misinfo-Ops: Next Generation Cybersecurity

As misinformation continues to spread, it has become blatantly apparent that the problem has moved beyond social media and questionable websites into every aspect of our lives, on a global scale. When false information encroaches on decision making, elections, and the success or failure of businesses, it becomes a cybersecurity issue. If we cannot depend on the data we receive, it will only be a short time before we witness the fall of corporations and economies. Those who design misinformation and disinformation campaigns are becoming far more sophisticated, and combating them requires even more powerful tools and experts.

We think of cybersecurity as the steps taken to stave off hacker attacks and system breaches, but in recent years cybercriminals have not only been incredibly successful, they have also grown from ragtag groups into complex organizations that emulate the way technology companies create, test, and launch software. Cyber threats have escalated to such a degree that organizations have been instituting new protocols as defensive measures. The problem has been how slowly those measures are adopted and even more slowly acted upon. Instead of being proactive, a majority of companies simply “took the chance” of a system breach, and this has cost billions around the world.

Misinformation is heading down the same path as cyberattacks, and if the topic is handled in the same way, it may take years for those making the decisions to become proactive. Security companies, including Symantec and Cisco, have been raising red flags for years, publishing as much financial and reputational data as they can as a warning. However, even with their intense “shouting,” corporations large and small have moved at a snail’s pace to thwart cyber attacks.

One of the most challenging areas to gauge is the financial cost of cyber attacks. Beyond stolen proprietary data, system outages, and lost business, companies suffer damage to their reputation. An additional factor in the equation is that not all organizations report a system breach. For years, the driving reason for the lax attitude toward cybersecurity was cost, and ignoring the potential of an attack was standard practice.

In a 2019 report, Symantec listed:

56% increase in web attacks

12% increase in enterprise ransomware (malware)

33% increase in mobile ransomware (malware)

25% increase in the number of attack groups using destructive malware

It took over twenty years for a majority of organizations to recognize cybersecurity as a priority and put technology and people in place to combat the threat.

The Psychology of Misinformation Campaigns

One of the critical aspects to recognize in the alignment between cybersecurity and misinformation is the change in methodology by cybercriminals. Once they realized that breaching a system through standard hacking took far too long, they turned to the path of least resistance: human error. The launch of ransomware relies not on sophisticated ways of getting into a network, but on a simple email with an attachment.

Cybercriminals exploited the psychology of someone trusting an email, opening it, and letting a Trojan take over entire systems. This is the same kind of “trust” that purveyors of misinformation are using to spread their campaigns, and like cybercriminals, they have an agenda. In the case of lies and deceit, the purpose is to confuse, redirect, encourage hate, make accusations, and create a cause and effect that can change everything from a political election to a social belief system.
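To make the parallel concrete, here is a minimal, hypothetical sketch in Python of the kind of attachment check a mail gateway might run to catch the simple “email with an attachment” delivery described above. The extension list and the quarantine helper are illustrative assumptions, not any particular vendor’s implementation.

# Hypothetical sketch: flag inbound email attachments whose file types are
# commonly abused to deliver ransomware droppers. Illustrative only.
from email import message_from_bytes

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".xlsm", ".iso"}

def risky_attachments(raw_email: bytes) -> list[str]:
    """Return filenames of attachments with commonly abused extensions."""
    msg = message_from_bytes(raw_email)
    flagged = []
    for part in msg.walk():
        filename = part.get_filename()
        if not filename or "." not in filename:
            continue
        ext = "." + filename.rsplit(".", 1)[-1].lower()
        if ext in RISKY_EXTENSIONS:
            flagged.append(filename)
    return flagged

# Example use: warn the user or quarantine before the attachment is opened.
# if risky_attachments(raw_message):
#     quarantine(raw_message)  # hypothetical helper, not a real API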

The lines between misinformation and cybersecurity have now been drawn so close that they are nearly one and the same.

We are observing a drastic expansion of news and information being viewed on our mobile devices, and yet the Symantec report indicates that 1 in every 36 mobile devices has a high-risk app installed. As we browse websites for the items we buy and the information we seek, the average individual is accessing sites that are designed to deceive and undermine trust.

The same Symantec report indicated that one in ten URLs analyzed was identified as malicious. It is becoming quite clear that the next generation of cybersecurity will have to take the fight against misinformation under its wing.

One of the areas of concern relates to the “incentives” behind misinformation, which range from changing a political outcome to financial profit. Some of the original misinformation formulas included “clickbait” articles and ads. Using the targeting algorithms available through social media and digital marketing platforms, the purveyors gleaned a list of users most likely to click on specific headlines. Each time they got a “click,” they earned money. The psychology is to appeal to the user’s belief system, and once users begin to click, the system serves additional ads and articles that reinforce their ideologies.
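As a rough illustration of that targeting loop, the hypothetical Python sketch below ranks users by how closely a headline matches their inferred interests. The user profiles, topic tags, and scoring are invented for the example and stand in for the far richer signals real platforms hold.

# Hypothetical sketch: pick the users most likely to click a given headline,
# based on a crude interest profile built from their past clicks.
from collections import Counter

user_profiles = {
    "user_a": Counter({"politics": 8, "finance": 2}),
    "user_b": Counter({"sports": 5, "politics": 1}),
    "user_c": Counter({"politics": 4, "health": 4}),
}

def affinity(profile: Counter, headline_tags: list[str]) -> float:
    """Share of the user's past clicks that match the headline's topics."""
    total = sum(profile.values()) or 1
    return sum(profile[tag] for tag in headline_tags) / total

headline_tags = ["politics"]
targets = sorted(
    user_profiles,
    key=lambda u: affinity(user_profiles[u], headline_tags),
    reverse=True,
)
print(targets)  # the users most likely to click, and to be fed more of the same

Each click feeds back into the profile, which is how the reinforcement loop described above tightens over time.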

As the purveyors’ profits increased, they created more sophisticated methods of gaining the viewer’s trust, including online polls that appeared on social media, within articles, and even splashed across your screen as you logged into your email. False narratives became the news, tailored to appeal to the reader’s psychographic profile. More elaborate demographics could also be pulled to intensify the form and type of misinformation each individual received.

In other words, people are fed lies based on their internet use, and the lies are affecting their opinions, choices, and belief systems.

It’s Time for Misinformation Ops

Just as organizations have been required to add cybersecurity professionals as part of the “cost of doing business,” technology companies, governments, and social media giants are now required to ensure that misinformation attacks are identified.

In many cases, cybersecurity companies are brought onboard to perform system analysis, testing, employee training, and recommendations for network changes. These companies stay on top of all of the newest cyber threats and communicate with IT departments to implement the required changes. This approach is one of the most cost-effective methods, as the ops team works closely with the organization to inform and assist in protecting it.

As the internet age evolves, professionals have developed a set of tools that are critical in deciphering “truth from lies.” The technology incorporates a number of capabilities, including the examination and analysis of articles, websites, blogs, memes, and even AI-generated videos, to determine their veracity and who sourced them. When the results are complete, the falsehood is identified and flagged, creating an informed viewer.
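As a simplified illustration of one piece of such a toolkit, the hypothetical Python sketch below trains a basic text classifier to score how likely an article is to be misleading. The two-item training set, the labels, and the model choice are placeholders for the example; a real system would also weigh sources, images, and video.

# Hypothetical sketch: score text for "likely misleading" with a simple
# TF-IDF + logistic regression pipeline. Training data is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training set: (article text, label) where 1 = likely misleading.
texts = [
    "Officials confirm the report after an independent audit.",
    "SHOCKING secret they don't want you to know, share before it's deleted!",
]
labels = [0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new article and surface the probability to the reader.
prob = model.predict_proba(["Share now: this one trick exposes the truth!"])[0][1]
print(f"Estimated probability of being misleading: {prob:.2f}")

The point of surfacing a score like this is the same one made above: give the viewer enough context to recognize a falsehood before it shapes a decision.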

The lessons learned in the years it took to accommodate cybersecurity should not be forgotten. In today’s age, we cannot afford to wait twenty years to institute programs that recognize manipulation and alert the public that they are being manipulated. For those who understand the high stakes, such as government institutions and larger corporations, having misinformation specialists on staff will become a requirement. The lines will blur entirely as this new staff becomes an integral part of the cybersecurity team.

We have entered an age when the technologies that we are designing must offer additional layers of identification and control so that we can help to ensure that we win the battle against misinformation. People have an innate desire to know the truth and are becoming outraged at the idea that others are manipulating them for profit and personal agendas.

This Is Why We Fight,
Blackbird.AI Team

We are fighting in the war against misinformation to create a more empowered, critical-thinking society.

To find out more about our team and the Blackbird.AI Mission, visit us at www.blackbird.ai
