The Risk Management Tool from Heaven

When Technology Actually IS the Answer

The cybersecurity world has long been accustomed to talking about risk in military terms, like code red and condition yellow.

This nonsense has been the single biggest roadblock to communication between CISOs, board members, and the C-suite for as long as I have been in the business. My favorite chairman recently told me that he interprets these alerts the same way he used to assess air-travel risk based on the TSA warnings.

Regardless of color, he continued flying.

But as cybersecurity has become as much an issue for business executives as for technology managers, the language has changed, and the conversation about risk needs to become quantitative.

An Indicator of Maturity

Measuring cyber-risk in quantitative terms allows CISOs to have a different level of conversation about the probable impact of threats. It is also a first step away from the detection-based, reactive mindset and toward a risk-centric approach to determining how specific assets will be treated in cybersecurity terms, and it signals a growing maturity in the field of cyber-risk.

A number of organizations have developed tools to support this movement. The International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) have created sweeping, comprehensive standards. And a tool like the Factor Analysis of Information Risk (FAIR), while badly flawed, has managed to take the high ground as a practical framework that helps organizations uphold standards related to cyber-risk.

My dislike of FAIR aside, I am deeply grateful that someone has arranged the guardrails that enable a movement toward quantified cyber-risk management.

Building a Better Tomorrow

What CISOs should do is adopt an approach that uses technology to leverage the wide variety of data points already generated by deployed security devices and other sources to model and calculate technology risk in near-real time.

The results of these calculations can then be conveyed to stakeholders in formats they can use: a quantitative representation of monetary risk for executives, and a constantly updated, prioritized list of risks to address for IT security personnel.

And as IT security personnel address each risk, the executive is notified of the risk reduction. And unlike traditional IT security devices that are “tuned-down” to eliminate false positives, this technology would assess ALL traffic – post control – that traverses the environment.
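The core of such a model can be sketched in a few lines. This is a deliberately minimal illustration, not the technology itself: the asset names, likelihoods, and dollar figures are hypothetical, and a real system would derive the likelihood estimates from the security telemetry described above.

```python
from dataclasses import dataclass

@dataclass
class AssetRisk:
    name: str
    breach_likelihood: float  # annualized probability of compromise, 0.0-1.0
    impact_usd: float         # estimated monetary loss if the asset is breached

    @property
    def expected_loss(self) -> float:
        # Expected (probable) loss = likelihood x impact
        return self.breach_likelihood * self.impact_usd

def risk_report(assets):
    """Return total monetary exposure and assets ranked by expected loss."""
    ranked = sorted(assets, key=lambda a: a.expected_loss, reverse=True)
    return sum(a.expected_loss for a in ranked), ranked

assets = [
    AssetRisk("customer-db", 0.05, 4_000_000),  # hypothetical figures
    AssetRisk("hr-portal",   0.20, 250_000),
    AssetRisk("public-site", 0.40, 50_000),
]
total, ranked = risk_report(assets)
print(f"${total:,.0f}")   # executive view: total exposure in dollars
print(ranked[0].name)     # analyst view: highest-risk asset first
```

The same calculation feeds both audiences: the summed figure is the executive's monetary exposure, and the ranked list is the analyst's remediation queue.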

This way, an organization can align its cybersecurity strategy with the risk management policies used by the rest of the business, enabling more accurate and robust business decisions. The benefits of this approach are:

1. Precision Cybersecurity Defense

The ability to detect and alert on threats outside the existing threshold parameters of SIEMs and other network-monitoring technologies, combined with the identification of their relevance to critical assets at risk, would enable response organizations to focus on what matters and: a) require less headcount for analysis and remediation, b) concentrate and prioritize remediation activities only on important assets, and c) identify previously hidden vulnerability probes that lead to breaches.

2. Continual Automated Risk Assessment

This technology would default to continuous analysis: every element in the computing environment would be constantly assessed against threat factors, with quantified risk levels adjusted and results presented on a minute-by-minute, hourly, daily, and monthly reporting schedule.
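The idea of continuous re-assessment with periodic roll-ups can be illustrated with a toy model. Everything here is an assumption for illustration: the event fields, the severity scale, and the smoothing weight are hypothetical stand-ins for whatever scoring model the real system would use.

```python
from collections import defaultdict

def update_scores(events):
    """events: (minute, asset, severity 0-10) tuples.
    Returns each asset's running risk score plus hourly snapshots."""
    scores = defaultdict(float)
    hourly = defaultdict(dict)
    for minute, asset, severity in sorted(events):
        # Exponential smoothing: recent observations dominate, old ones decay,
        # so the score is "constantly assessed" rather than recomputed quarterly.
        scores[asset] = 0.7 * scores[asset] + 0.3 * severity
        # Snapshot the score into the current hour's reporting bucket.
        hourly[minute // 60][asset] = round(scores[asset], 3)
    return dict(scores), dict(hourly)

events = [(5, "customer-db", 8), (20, "customer-db", 2), (70, "hr-portal", 5)]
scores, hourly = update_scores(events)
```

The same stream of events yields both the live score and the hour-by-hour report, which is the point: one continuous pipeline replaces separate assessment campaigns.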

It would eliminate the need to run separate risk assessment tools and perform quarterly risk assessment campaigns. The result would not only be a major upgrade in controls and governance, it would reduce cost and organizational headache required for manual assessments, eliminate organizational resistance, and increase risk confidence.

3. Continual Compliance

Every current cybersecurity regulation, from NIST 800-171 to HIPAA to PCI DSS, requires more frequent validation of security controls and vulnerabilities. While each control in every regulation is called out individually, collectively they create a flow: the full set of controls follows a cycle.

This technology would provide a flow of continuous vulnerability and threat validation on quantified assets actually exposed, their remediation status, and the change factors that put compliance at risk. It would fully satisfy every regulatory requirement related to vulnerability assessment and threat validation, identifying and evaluating existing security controls, and calculating risk levels in real-time.

4. Enriched Governance, Risk and Compliance (GRC)

In addition to regulators pressuring companies to put their GRC house in order, financial institutions are starting to label non-compliance as a credit risk. If a business is non-compliant with the security standards for its industry sector, it becomes technically uninsurable and thus fails to qualify for loans or increases to lines of credit. Moody's decision to cut its rating outlook for Equifax from stable to negative was a classic example, and it foretold the future impacts of poor governance. And when different departments use their own processes and tools, it is difficult to assess risk and compliance holistically.

Uniting analytics and reporting activities in one platform enables organizations to develop accurate, data-driven action plans to address GRC exposures. The results from this technology would become the lifeblood of a unified GRC system, providing executive management with a true and present portrait of their cybersecurity posture and enabling them to make intelligent decisions about risk transfer, acceptance, reduction, or elimination.

5. Strategic to Operational to Tactical Communications

A technology like this would close the gap between the language of security and the language of risk, making it easier for a CISO to play a meaningful role in the enterprise risk management (ERM) function.

In ERM frameworks, the word “risk” carries a very particular meaning. Cybersecurity leaders, most of whom come up from the technical side, tend to focus on tactical technical issues rather than bottom-line business impacts.

A business-focused description of an unpatched vulnerability, for example, would be that patching it will reduce the probability of a breach of a particular database which, if exposed, would cost a specified amount of money in lost business, fines, and remediation expenses.

With this technology, the organization could determine whether a mitigation plan makes bottom-line sense, or whether the reduction in risk is too small, the database not critical enough, and the company better off spending its time and money elsewhere.

6. Cyber-Insurance

Cybersecurity insurance covers areas of liability not covered by traditional policies and often includes costs arising from data destruction or theft, extortion demands, hacking, identity theft, denial-of-service attacks, and crisis-management activities related to breaches.

Cyber-risks have been difficult for insurance companies to quantify because of the lack of actuarial data. Insurers have often compensated by relying on qualitative assessments instead. These assessments become part of the terms of coverage, with any false statement voiding the contract. The policies are therefore more customized, which ultimately makes them more costly.

This technology would positively impact cyber-insurance estimating in two ways: by strengthening the controls associated with threat detection, identification, remediation, and recovery, and by factually quantifying the potential financial impact of operating losses, staff expense, support, external expertise acquisition, and so on. The result would be lower premiums and greater opportunity for risk transfer.

7. Personal Liability for Directors and Officers

The high-profile data breaches at Equifax, Quora, and Marriott all highlight for boards of directors that cybersecurity should be approached as an enterprise-wide risk-management issue with broad legal and regulatory implications.

In its 1996 Caremark decision, the Delaware Court of Chancery declared that directors can be held personally liable for failing to “appropriately monitor and supervise the enterprise,” and this ruling has become the gold standard under which derivative lawsuits are filed following a cyber-breach. In that ruling the court emphasized that a company’s board of directors must make a good-faith effort to implement an adequate information and reporting system, a standard that today extends to cybersecurity.

Failing to do so can constitute an “unconsidered failure of the board to act in circumstances in which due attention would, arguably, have prevented the loss.”

Obviously, the more information, quantification, and governance the board demands from the supporting IT and InfoSec organizations, the greater the chances that directors and officers will be protected from personal-liability lawsuits.

This technology would produce unprecedented visibility into cybersecurity threats, vulnerabilities and risk, satisfying the “monitoring, supervision and due attention” argument to the fullest extent possible.

If you would like to gain a better understanding of what’s possible with this technology, send me an email and I will connect you to the magic lab. It’s real and it’s amazing.
