Artificial Intelligence meets Cybersecurity

Opportunities and Challenges

The development of artificial intelligence has become a national strategic imperative for nations as diverse as China, Russia, the UK, the US, France and Canada, along with thirteen others. Likewise, the market for artificial intelligence software automation is expected to grow substantially over the next five years, from approximately $10 billion in 2019 to an estimated $125 billion by 2025. As artificial intelligence in cybersecurity grows in importance, the role of information security officers will have to evolve in lockstep to address a variety of new challenges that have yet to be fully contemplated today. IT professionals cannot meet these challenges alone; they will need help responding to a myriad of policy, public and ethical concerns, which will require partnerships across the spectrum of government, business and technology communities.

New artificial intelligence standards are being developed to address a range of issues, yet little widespread attention has been paid in the public domain to gaps in policy. To remedy these gaps, standards bodies including the American National Standards Institute (via ISO/IEC JTC 1/SC 42), NIST (under Executive Order 13859) and the UK's Information Commissioner's Office (ICO) have begun developing frameworks and standards for artificial intelligence. The standards development process will take time, with different standards setters evolving along specific areas of focus such as data privacy, ethical use and the technical design of AI systems. More will need to be done as intelligent systems become more complex.

Artificial intelligence has become a buzzword used to describe even simple automation, but the differences are important to understand. Each level of automation plays a valuable role, but each serves different uses. Robotic process automation (RPA) is the lowest level of automation. RPA follows strict rules and, when programmed properly, executes repetitive processes such as automating accounting workflows, performing data collection, and transferring information without human interaction. Organizations have begun to combine one or more levels of intelligent automation to achieve higher performance and operational efficiency. (See figure 1.)
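To make the distinction concrete, the rule-following character of RPA can be illustrated with a minimal sketch. This is not from any RPA product; the function name, field names and the `APPROVAL_LIMIT` threshold are all hypothetical, chosen only to show a strict, deterministic rule applied to records with no human interaction and no learning.

```python
APPROVAL_LIMIT = 500.00  # hypothetical business rule: auto-approve small invoices

def process_invoices(invoices):
    """Route each invoice by a fixed rule, in the RPA spirit:
    auto-approve amounts at or under the limit, queue the rest
    for review. The rule never adapts or learns."""
    approved, review = [], []
    for inv in invoices:
        if inv["amount"] <= APPROVAL_LIMIT:
            approved.append(inv["id"])
        else:
            review.append(inv["id"])
    return approved, review

batch = [
    {"id": "INV-001", "amount": 120.00},
    {"id": "INV-002", "amount": 980.50},
    {"id": "INV-003", "amount": 499.99},
]
approved, review = process_invoices(batch)
print(approved)  # ['INV-001', 'INV-003']
print(review)    # ['INV-002']
```

Higher levels of intelligent automation differ precisely in that the routing logic would be learned from data rather than hand-coded as above.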

Adversarial artificial intelligence has also grown in response to open access to AI research. Cyber attackers' use of offensive AI is becoming more sophisticated: malicious code can self-mutate as it learns from its environment, hiding in the background by leveraging existing data flows and human behavior to remain undetectable amid the noise of network traffic. Offensive AI will require new oversight models, such as human/machine collaboration in decision-making, in which defensive AI supplements the traditional IT security posture. Offensive AI has already shifted the aim of attacks from stealing data to manipulating or changing it, making attacks harder to detect. Adversarial artificial intelligence is currently being used to weaponize human perception and existing defenses by impersonating trusted users. Malware such as the Emotet Trojan, a prototype for AI-assisted attacks, poses a triple threat to security defenses, combining anti-analysis modules, advanced obfuscation and spreader modules in a single distribution mechanism built on previously deployed banking trojans and worms. As next-generation AI attacks evolve, legacy security methods may be unable to identify these asymmetric threats without equally AI-enabled defensive strategies.

Artificial intelligence represents a novel risk: predictable threat models will be neutralized as adversarial AI systems change their behavior in real time in response to defensive strategies in order to remain undetected. As organizations increase their dependence on devices and networked systems both inside and outside the enterprise, vulnerabilities will multiply and become harder to detect without advanced security systems. The integration of human-machine interactions, where AI decisions are relied upon or used in conjunction with human actors, represents a dynamic risk requiring a higher level of attention. The Boeing 737 MAX is one example of how over-reliance on automated decision-making and an underestimation of its risks can result in catastrophic failure, even in industries such as aerospace where machine learning has a history of assisting pilot performance. IT professionals will need to develop new skills and processes alongside defensive AI systems in response to these new threats.
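At the core of many AI-enabled defensive strategies is the statistical idea of flagging behavior that deviates from a learned baseline rather than matching a fixed signature. The sketch below is a deliberately simplified stand-in for that idea, not a production detector; the traffic figures, the function name and the three-standard-deviation threshold are all illustrative assumptions.

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations deviating from the baseline mean by more
    than `threshold` standard deviations -- a toy version of the
    baseline-vs-anomaly comparison behind AI-enabled detection."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) > threshold * stdev]

# Hypothetical hourly outbound-traffic volumes in MB
baseline = [48, 52, 50, 47, 53, 49, 51, 50]   # normal activity
observed = [51, 49, 310, 50]                  # a 310 MB spike stands out
print(flag_anomalies(baseline, observed))     # [310]
```

A signature-based tool would miss such a spike unless it matched a known pattern; the point of behavioral baselining is that novel, self-mutating attacks still have to deviate from normal activity somewhere.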

Information security executives will need to work with senior leadership to establish ethical standards and governance models for the use of artificial intelligence. Artificial intelligence will require clarity at the enterprise level involving each of the respective oversight functions (audit, risk and compliance) and improvements in the Three Lines of Defense model for better governance. Cross-functional teams of oversight and business leaders may be required to establish new operating models from which IT assurance can be formally established. Subjective risk assessments will be insufficient in an AI world, where reputational damage can escalate exponentially and threaten the survival of the firm.

Getting started

Senior executives and the board must become advocates of active cyber defense, treating the outstanding governance issues created by gaps in artificial intelligence policy, and the accompanying economic challenges, as a strategic digital imperative for future growth. The $100 trillion digital opportunity identified by the World Economic Forum will not be achievable without leadership from the top.

In the meantime, new standards being developed by global standards setters will address some of the gaps identified here. But new standards will not be enough. Technology moves faster than standards, and cyber adversaries are incentivized to innovate by exploiting vulnerabilities in legacy systems and outdated security practices. Accepting cyber risk as a cost of doing business is short-sighted given the exponential rise in threats to future growth, national security and societal prosperity. Cybersecurity professionals may be on the front line of the war in cyberspace, but this war will not be won without the attention, resources and support of leadership, government and mutual global partnerships.
