AI: Rescuing Human Frailty from its own DNA

Data Breach Fever: The New Dark Ages?

As we have seen repeatedly throughout 2019 and over the last decade, a simple data breach can result in the loss of billions of dollars in assets, revenue, and shareholder value, along with intense reputational damage.

It can also result in the shutdown of critical infrastructure such as electric grids and nuclear power plants, the leak of a boatload of classified government data, and the public disclosure of enormous amounts of personally identifiable information (PII).

The Danger in Humans Trying to Mimic Machines

Taken to a non-hypothetical extreme, these breaches all have the potential to someday collapse entire economies, drive what we now think of as political civility and order into chaos and anarchy, cause an unrecoverable compromise of national security, and enable the theft and manipulation of all PII, resulting in complete mistrust of the underlying security of personal identity. In almost all of these instances, the cause can be traced back to human error around cybersecurity.

Most CISOs, understandably, do not believe their fellow employees are capable of safeguarding the data they handle on a day-to-day basis. One of the main reasons behind this apparent ineptitude is that most of the cybersecurity solutions our enterprise workers rely on are difficult to manage. In order to stay productive, most employees develop well-intentioned workarounds that create brand-new vulnerabilities against which no defense has been identified or even imagined.

The AI Antidote: Human Enhancement, Not Replacement

All of us work in highly pressurized, stressful environments, and most of us use multiple computing devices throughout the day, many of them mobile and small-screen in nature. Our best intentions, and sometimes our most contrived workarounds, put us squarely at the intersection of speed and malicious invention. Put more simply, malicious actors fully understand our vulnerabilities and exploit them through increasingly sophisticated and artful social engineering.

Our inability to match our adversaries’ speed and cunning creates an opportunity for artificial intelligence (AI) to assist in rescuing human frailty from its own DNA.

But this isn’t the AI that we see in the movies or that we read about in science fiction. Instead, this is the class of AI that IBM’s CEO Ginni Rometty speaks about when she says, “Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”

This is the AI that provides a bridge across the chasm between productivity and security. It can enable a kind of “invisible security” that grows and evolves against threats as they occur, potentially leveling a playing field that is now grossly imbalanced in favor of threat actors.

Modern threat vectors morph at machine speed, well beyond the reach of human response. The speed of AI, however, matches that morphing pace, and with the proper training, machine learning algorithms can detect new threat embodiments before they can gain a foothold.
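To make that idea concrete, here is a minimal sketch of signature-free detection: an anomaly detector learns a baseline of normal activity, so a pattern it has never seen stands out even though no signature for it exists. The feature set, the numbers, and the choice of scikit-learn’s IsolationForest are illustrative assumptions, not a production design.

```python
# A minimal sketch of signature-free threat detection, assuming network
# telemetry has already been reduced to numeric features (e.g., bytes
# transferred, connection rate, failed-login count). Features and data
# here are hypothetical illustrations, not a production pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: the model learns what "normal" looks like rather
# than matching known attack signatures, so a morphed variant can still
# stand out as an outlier.
normal_traffic = rng.normal(loc=[500, 20, 1], scale=[50, 5, 1], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# A novel pattern -- huge transfer volume plus a burst of failed logins --
# is flagged (-1 = anomaly) even though no signature for it exists.
new_events = np.array([
    [510, 22, 0],      # looks like baseline traffic
    [9000, 300, 45],   # unseen, suspicious pattern
])
print(detector.predict(new_events))  # e.g., [ 1 -1 ]
```

The point is not the specific model but the posture: instead of enumerating known threats, the system flags departures from learned behavior, which is how it can keep pace with attacks that mutate faster than humans can write rules.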

Removing the Cognitive Biases in AI

The challenge with current AI solutions is that they cannot function without human assistance. That means that whatever programming or tuning is necessary for detection and blocking is going to be based on our own experience and will by definition embed our entire set of cognitive biases and reasoning errors into the result.

The “availability” bias, for instance, could lead us to train our AI systems to watch for whichever threat vectors dominate recent headlines and trend reports, simply because that information comes to mind most readily.
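Here is a hypothetical sketch of how that bias leaks into a model. The synthetic data and the stand-in LogisticRegression classifier are illustrative assumptions only: because the training set’s “attack” examples cover just the vectors we were already watching for, a genuinely novel vector is confidently scored as benign.

```python
# A hypothetical illustration (synthetic data) of availability bias in
# training: the labeled "attack" examples cover only the threat vectors
# currently in the headlines, so anything else defaults to "benign".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features: [phishing_signals, ransomware_signals, other_signals]
benign = rng.normal(0, 1, size=(500, 3))
trending_attacks = rng.normal([5, 5, 0], 1, size=(500, 3))  # only the "trending" vectors

X = np.vstack([benign, trending_attacks])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = attack

model = LogisticRegression().fit(X, y)

# A novel attack expressing itself only through the third signal is
# scored as benign -- the model was never shown anything like it.
novel_attack = [[0.0, 0.0, 8.0]]
print(model.predict_proba(novel_attack))  # high probability for class 0 (benign)
```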

“Confirmation” bias may lead experienced security analysts to decide that whatever preceded past data breaches holds the keys to detecting future attempts. This bias becomes a weakness as analysts tend to investigate incidents in ways that only support their existing beliefs.

The fundamental “attribution” bias leads security analysts to conclude that PEBKAC (Problem Exists Between Keyboard and Chair) is in play in almost every security breach, and it may result in AI training that is overly weighted in that direction.

The objective in creating AI solutions to fight polymorphic attack vectors should be to build harmony between the best characteristics of human behavior and the most effective characteristics of AI technology as it exists today, not as it might exist in the future.

In fact, human behavior and AI technology can compensate for one another’s weaknesses. AI is obviously faster than we are and makes no errors beyond those we impose through faulty rules and reasoning, while humans can manage, limit, or expand the technology’s capabilities as appropriate to the targeted tasks.

The present and real opportunity for AI to help create improved cybersecurity profiles and a more defensible threat landscape depends on our ability to approach the application of “augmented intelligence” with a clear and disciplined sense of purpose.

Though it is seldom noted in human endeavor, higher intentions pursued objectively often produce the best outcomes.
