Canary in the Cybermine

Artificial Intelligence is Not Helping Cyber-Defense

I have been in the business of helping companies address cybersecurity issues for a long time. Most of what we suggest to prospects consists of solutions that have been proven in the field and enjoy successful track records across thousands of implementations. The benefits are known, measured, and validated. Still, much of the resistance we encounter centers on whether a given approach to a technical problem is “right” for a particular prospect.

Now, imagine how slow enterprise adoption will be for AI-enhanced or AI-enabled products or services that promise to improve performance, productivity, or protection compared to the way things are done today. In marketing, for example, it seems ridiculously obvious to me that if I told you I could use AI to personalize email campaigns at a scale impossible for your human sales and marketing team to match manually, you would enthusiastically adopt automated personalized email.

Or, if I told you that I could integrate that function with an outsourced sales dev team and an email service that combined intelligent auto-dialing with account-based marketing and increase your sales performance by 500% at half of your current sales costs, you wouldn’t hesitate to adopt that approach either.

Or, if I had a technology that could ingest huge amounts of social media and tell you in real time what the market sentiment is for your product compared to your competitors’, broken down by demographics and location, you would want that technology right now.

What if I told you that I had an AI/ML solution that could predict the next cybersecurity attack and identify the exact threat vector and vulnerability in advance? Wouldn’t you want to implement that solution as quickly as possible?

When Possibility Meets Reality

The reality is that ALL of those capabilities exist today, yet the adoption rate for these technologies remains virtually flat. Why? People are not all that crazy about new ways of doing things, and organizational inertia seems to increase in direct proportion to both the size of the entity and the perceived risk associated with change.

At its core, the concept of AI involves computational technology designed to make machines function with foresight that mimics, and ultimately surpasses, human thinking processes. Extreme forms of AI, in which thinking machines would “take off” on their own, modifying themselves and independently designing and building ever more capable systems unbound by the slow pace of biological evolution, are the forms that worry people like Elon Musk and the late Stephen Hawking, as well as many cybersecurity experts today.

Fear of the Future

The problem with applied AI in the cybersecurity space is not that it can’t do the kinds of things we hope it can do right now, but rather that we seem afraid to adopt it at all. That fear apparently stems from the impact AI may or may not have on our own agenda, our organization, our goals and objectives, and our career well-being. We are not too crazy about adopting technologies we can’t control, or about embracing organizational impacts that may threaten our positions within the status quo. All understandable.

The Adoption Dilemma Compounds

But the adoption dilemma in cybersecurity is compounded tenfold when we realize that threat actors have no such organizational hindrances or psychological barriers. Nothing stops them from quickly leveraging every AI technology advance to expand the scale and efficiency of their attacks, while we seem forced into an endless loop of evaluation, study, discussion, and process.

There is no organizational inertia preventing the bad guys from broadening their privacy-invasion and social-manipulation capabilities, or their ability to compromise physical systems such as drones, robots, ICS/SCADA sensors, and driverless cars, or to penetrate the everyday organizational information technology systems we rely on to protect our critical information assets. It is also telling that only five years ago, IoT represented ZERO cybersecurity risk according to a BI Intelligence survey conducted at the end of 2015.

In fact, new and novel attacks that take advantage of an improved capacity to analyze human behaviors, moods, and beliefs should be expected, and very soon. After all, we already have technologies that can “listen” in on the other end of a sales conversation and dynamically alert the sales rep to the correct tack to take with a particular prospect personality type, in real time. These algorithms excel at detecting, comprehending, and predicting human behavior.

Malicious AI Alert

Researchers at the Electronic Frontier Foundation, the University of Oxford’s Future of Humanity Institute, the University of Cambridge’s Centre for the Study of Existential Risk, the Center for a New American Security, and OpenAI collaborated on a 2018 report detailing the urgent threats posed by the malicious use of artificial intelligence across domains including cybersecurity.

And now, three years later, it has only gotten worse.

The report details the impact of fake news on the ability of democracies to sustain truthful public debate, driven by increasingly convincing synthetic or fake imagery and video and a corruption of the information space, with an emphasis on the connection between information security and the exploitation of AI for malicious purposes.

AI-Powered Fake News

GANs, or “generative adversarial networks,” pair two neural networks, a generator and a discriminator, trained against each other. They were originally designed to help us detect anomalies, enhance images, and train medical algorithms, but they are increasingly used instead to create hoaxes, doctored video, and forged voice clips, aka fake news.
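To make that adversarial dynamic concrete, here is a minimal sketch of a GAN training loop, assuming PyTorch. The network sizes, learning rates, and random stand-in data are illustrative placeholders, not anything drawn from the report.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: turns random noise into a synthetic sample (a stand-in for a fake image).
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to make the discriminator call its output "real".
    g_loss = bce(D(G(torch.randn(n, latent_dim))), real_labels)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

# Illustrative run with random tensors standing in for genuine training images.
for _ in range(100):
    train_step(torch.randn(32, data_dim).tanh())
```

Each side improves only because the other does: the better the discriminator gets at spotting fakes, the better the generator must get at producing them, which is exactly why GAN output is so hard to police.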

Previously, faked images and videos were relatively easy to detect because we could compare them with original source images already in existence on the web. Fake images generated by a GAN are far harder to detect because there is no original source for a detection algorithm to compare them against. When a platform like Facebook, Google, or Twitter needs to police fake content, its sheer scale requires automation, which would seem an obvious application for GANs, except that a neural network built to fool a neural network will likely fool whatever algorithm a platform can put in front of it as well.

All of this, of course, is especially true of malware detection technologies.

While we take forever to think about adopting these technologies within our organizations, the bad guys are already using them to develop new and evasive malware that is having a party with our most advanced cybersecurity defense systems. Tomorrow they will be doing it with adversarial GANs while we are still contemplating policing GANs. This phenomenon further aggravates the pillars of cybersecurity asymmetry: economics, information, education, and technology.
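As a rough, hedged illustration of how evasive malware exploits learned detectors, the toy sketch below (again assuming PyTorch) takes a single gradient-sign step to push a sample’s features away from a “malicious” verdict. The detector, feature vector, and step size are all invented for illustration, and real-world evasion must also preserve a working file, which this ignores.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "detector": in practice this would be a trained malware classifier.
detector = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

# Toy feature vector representing a malicious sample the detector should flag.
x = torch.rand(1, 32, requires_grad=True)
malicious = torch.ones(1, 1)

# The gradient of the detection loss with respect to the input tells the
# attacker which direction in feature space weakens the "malicious" verdict.
loss = bce(detector(x), malicious)
loss.backward()
x_evasive = (x + 0.1 * x.grad.sign()).clamp(0.0, 1.0).detach()

print("malicious score before:", torch.sigmoid(detector(x)).item())
print("malicious score after: ", torch.sigmoid(detector(x_evasive)).item())
```

The point of the sketch is the asymmetry: the attacker only needs query or gradient access to nudge a sample past the model, while the defender has to anticipate every such nudge.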

In this instance, however, it is even more ironic since we are the ones who created the AI advantage at places like MIT, Stanford, Carnegie Mellon, Cambridge, Oxford, and Berkeley in the first place. But instead of carefully controlling the release of these innovations, we happily publish our findings for the whole world to see. This academic compulsion makes it much easier for countries like China, Iran, North Korea, and Russia to do what they do anyway: abscond with our R&D.

Dual Use of Artificial Intelligence

The report “suggests” that in response to the changing threat landscape in cybersecurity, researchers and engineers working in artificial intelligence development should take the dual-use nature of their work (dual-use for both good and evil) more seriously.

That means misuse-related considerations need to influence research priorities, norms, and institutions around the openness of research, including prepublication risk assessment in technical areas of special concern such as information security and cybersecurity, central access licensing models, and sharing protocols that favor safety and security over open access.

At least in today’s world.

Putting Safety First

By releasing this report, the researchers hope to get ahead of the curve on general AI policy. Whether or not they succeed will matter little if our organizations continue to dither over using AI and ML in ways that can protect their assets and our privacy against adversaries who are already fully armed with the latest AI algorithms and techniques and are leveraging them to step up their game for a dangerous future.

Learning from History

Getting researchers to focus on the ethics of implementing their technology, rather than on its novelty and engineering, might be an important step for the folks at these institutes. But out here in the cybersecurity world, history tells us two things.

One, it is very rare that the ethical ramifications of a scientific breakthrough are dealt with before the breakthrough happens; and two, businesses rarely favor the defensive applications of transformational technologies over the productivity gains and profitability that come from ignoring them.
