Predictive Context

Not predictions, but rather future context.

Predictive context is like a background of obviousness. If you like baseball, even if you haven’t played, you have a background of obviousness in baseball. If you are a student of cybersecurity history, yet have never held a certification, you have a context for the most likely future events.

While we are all trying to figure out how to avoid the next ransomware or open-source supply chain attack, and whether embracing a zero trust strategy is a good idea, here is some context for the next 5 years.

Based on what we have witnessed so far this year, our research team says we will probably see a steady increase and higher stakes in general disruption, distortion and deterioration.

Disruption.

Disruption is a category that focuses on our over-dependence on fragile connectivity, dense network complexity, production systems built on embedded open-source software, and ad hoc micro-segmentation.

These will combine to increase the risk of premeditated internet outages targeting specific business operations and in particular, the Internet of Things.

We will see it in medical devices, critical infrastructure and automated industrial control systems of every stripe.

F-35s are already more susceptible to a cyber-attack than to a kinetic missile attack.

Distortion.

Distortion is the spread of misinformation by bots and automated sources, which compromises trust in the integrity of information.

Banking and digital currency will become primary targets.

Whether by proxies of the Russians, the Chinese or a hedge fund’s short-the-US-dollar campaign, we will continue to see an avalanche of misinformation designed to stoke already burning dumpsters of disharmony and division.

Combined with steady COVID-19 confusion, our social fabric will increase the speed at which it discards unifying icons and instead seeks out tribal identities in which each faction can find solace.

Deterioration.

Deterioration is the result of our enterprises’ inability to control information.

Expanded attack surfaces, smart technologies, excessive identity trust, coarse-grained access management, hygiene failures, general apathy, ignorance or incompetence, and technology envelopes pushed by the race to the 4th industrial revolution will create a target-rich environment well beyond our ability to defend, respond, and remediate.

The conflicting demands posed by evolving national security will negatively impact an enterprise’s ability to control information, and leadership at the top will be insufficient to provide proper guidance and direction.

Some Threat Vectors to Consider.

The cloud is not our savior and it definitely ain’t our friend.

Two years ago, in 2019, the Oracle and KPMG Cloud Threat Report claimed that cloud vulnerability is, and will continue to be, one of the biggest cybersecurity challenges organizations face.

Data breaches, misconfiguration, insecure interfaces and APIs, account hijacking, malicious insider threats, and DDoS attacks are among the top cloud security threats that will continue to haunt firms failing to invest in a robust cloud security strategy.

And which firms do that?

Right.
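As a small, hedged illustration of what hunting for one of those misconfigurations can look like, here is a Python sketch using boto3 that flags S3 buckets with no public-access block configured. Treating a missing block as a finding is an assumption of the example; this is a sketch, not a complete cloud audit.

```python
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    """Flag S3 buckets with no (or partial) public-access block -- illustrative check only."""
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(conf.values()):
                findings.append((name, "public access only partially blocked"))
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append((name, "no public access block configured"))
            else:
                raise
    return findings

if __name__ == "__main__":
    for name, issue in buckets_missing_public_access_block():
        print(f"{name}: {issue}")
```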

Most enterprises still have no idea what information assets reside on their networks, what that information is worth or where it is stored. Moving that data to the cloud as part of an infrastructure migration strategy substantially increases the odds of sensitive data related to their employees and business operations being exposed to cyber-criminals.

The shared responsibility model is part of the bargain for very good legal reasons – it is an announcement that your lack of systems security is your responsibility.

As Willie Sutton once said about banks and money, the cloud will be where all the data lies and, given the way we approach cloud security, it will continue to be serious breach bait for ransomware attacks.

AI and Machine Learning.

AI and machine learning have finally begun to weave their way into the fabric of almost all industries. AI’s impact on manufacturing operations, the management of supply chains and now even security is reshaping the playing field.

The speed with which algorithms can unwrap workflow puzzles and perform formerly manual tasks is transforming the digital workplace, but those same characteristics are now being leveraged by bad guys to wreak havoc on defense and detection mechanisms.

And we are still at 4G.

Against that backdrop, AI fuzzing (AIF) and machine learning (ML) poisoning are teed up to become the next big cybersecurity threats because, together, they can identify vulnerabilities in applications and systems in real time.

Fuzzing involves inputting massive amounts of random data, called fuzz, into a target code set in an attempt to discover its crash points. Poisoning is corrupting the training data used by the ML model.
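To make those two techniques concrete, here is a minimal, hypothetical Python sketch: a naive fuzz loop that throws random bytes at a stand-in parser to hunt for crash points, and a label-flipping step that poisons a training set before a model ever learns from it. The parse_record target and the toy dataset are invented for the example; real AI fuzzing layers learned input generation on top of this basic loop.

```python
import random

def parse_record(data: bytes) -> None:
    """Stand-in for the target code under test (hypothetical)."""
    if len(data) > 2 and data[0] == 0xFF:
        # Contrived bug: certain inputs raise an unhandled error.
        raise ValueError("malformed header")

def fuzz(target, iterations=10_000, max_len=64):
    """Naive fuzzing: feed random bytes ('fuzz') and record crash points."""
    crashes = []
    for _ in range(iterations):
        data = bytes(random.getrandbits(8) for _ in range(random.randint(1, max_len)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))  # a crash point worth investigating
    return crashes

def poison_labels(dataset, flip_rate=0.1):
    """ML poisoning (label flipping): corrupt a fraction of training labels."""
    poisoned = []
    for features, label in dataset:
        if random.random() < flip_rate:
            label = 1 - label  # flip a binary label
        poisoned.append((features, label))
    return poisoned

if __name__ == "__main__":
    print(f"{len(fuzz(parse_record))} crashing inputs found")
    clean = [([0.1, 0.2], 0), ([0.9, 0.8], 1)] * 50
    dirty = poison_labels(clean)
    print(f"{sum(c[1] != d[1] for c, d in zip(clean, dirty))} labels flipped")
```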

AI enables instant testing with a large set of inputs, probing a system for its weaknesses, and can then activate the most appropriate malware payload based on the discovered vulnerabilities.

In real time.

Stop and stare.

Why don’t our major software vendors do the same thing to their own code?

They do.

But they (Microsoft, Google) do it at home in their labs to discover and then publish their own vulnerabilities, while our adversaries do it out in the wild seeking immediate targets.

On the battlefield, the hierarchy of control systems is discarded almost immediately after the first shot is fired.

Smart Contract Hacking.

Though smart contracts are in their early stages of development, businesses are already using them to execute some form of digital asset exchange or another. In fact, it’s smart contracts that made Ethereum famous.

Smart contracts are software programs that carry self-executing code. Many businesses are using smart contracts in lieu of traditional forms: contracts that let developers build the rules and processes of a blockchain-based application right into the contract itself.
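As a purely illustrative sketch of that idea, written in Python rather than an actual contract language like Solidity, here is a toy agreement whose terms are code that executes itself when its conditions are met. The parties, the escrow logic and the print statement are invented for the example; real smart contracts run as deployed bytecode on a blockchain.

```python
from dataclasses import dataclass

@dataclass
class EscrowContract:
    """Toy 'self-executing' agreement: the rules are code, not prose (illustrative only)."""
    seller: str
    buyer: str
    price: int
    delivered: bool = False
    paid_out: bool = False

    def confirm_delivery(self) -> None:
        self.delivered = True
        self._execute()  # the contract enforces its own terms

    def _execute(self) -> None:
        # Funds release automatically once the coded condition holds;
        # no third party interprets or enforces the agreement.
        if self.delivered and not self.paid_out:
            self.paid_out = True
            print(f"Released {self.price} to {self.seller}")

contract = EscrowContract(seller="alice", buyer="bob", price=100)
contract.confirm_delivery()
```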

And, at the same time to no one’s surprise, these contracts have become a prime target of online criminals looking for gold.

They are also a classic example of a new technology rushed to market ahead of maturity, safety and reliability testing. As our technologists are just getting to know how to design them and security researchers are still finding bugs in some of them, bad guys are following along with bread crumbs and a flashlight.

As with most new technologies whose architecture supposedly renders them impervious to hacking, yet which continue to spread through hopeful adoption, we will see blockchain-native holes exposed that threat actors will exploit, creating significant confidence issues around this substitute for paper contracts.

Lawyers are a paranoid group for good reason: their liability is off the charts.

Social Engineering Attacks.

While most organizations today are boosting their email security with software and tools to block phishing attacks, cybercriminals have completely outwitted those designs by inventing sophisticated phishing kits that facilitate data breaches and financial fraud.

Historically an effective, high-reward, low-investment strategy for cybercriminals to gain legitimate access to credentials, phishing remains the number one cause of data breaches globally (Verizon Threat Report 2020).

The key is the opportunity gap we hand adversaries when we begin to migrate from email to mobile-platform-based messaging and so easily acquiesce to marketing programs that paint the future in such a compelling way.

With the resulting sudden global rise in popularity of communication apps, smishing (SMS phishing) has taken the stage and now dominates messaging platforms: attackers encourage users to switch to apps like WhatsApp, Slack, Skype, WeChat, and Signal so that they can trick those same users into downloading malware onto their phones.

We will see a huge uptick in successful smishing attacks as cyber-criminals seek to close that gap.

Deepfake.

First coined by Reddit users in 2017, a ‘deepfake’ is a fake video or audio recording that cybercriminals use to convince an audience that what they are watching or listening to is the real thing.

This AI-based technology has made steady and alarming progress as algorithms become more adept at consuming, understanding and assembling extremely granular data on a daily basis.

As the technology continues to mature, cybercriminals are now using it in earnest as an effective way to foster disruption and make millions across a whole variety of industry segments, currently limited to rich targets in financial markets, media, entertainment, and consumer goods.

But soon, new use cases will abound, spilling over into international relations and politics.

In the business context, these AI-generated fake videos and audios will very soon be used to impersonate CEOs without detection and, through manipulation or outright appropriation, to steal hundreds of millions from enterprises, disseminate doctored information about principals and their businesses, interrupt business operations through transparent and completely legal stock market manipulation at the moment of trade, and impact broad, global market sectors.

Who will manage and remediate this attack vector?

The UN?

Who will detect it?

What if NotPetya re-emerged today without a kill switch?

What if it was invisible?

Born Anywhere.

The current puzzle seems far too complex now for anyone to understand, let alone plan to counter or have any answers that can impact an outcome.

Yet, we still have hope. Maybe our researchers are wrong.

Maybe the meaning of “Born in the U.S.A.” is the distance between the grim verses and the joyous chorus. It’s the space between frustrating facts and fierce pride — the demand to push American reality a bit closer to our ideals.

Maybe we awake in time.
