A Risk-Based Approach to Cybersecurity

The Challenge in Quantifying Cybersecurity Risk

A couple of years ago, the New York State Department of Financial Services adopted 23 NYCRR 500, which created a set of stringent cybersecurity requirements for financial services companies doing business in New York State. “Doing business” in this context means providing financial services to anyone living in the state, regardless of whether the provider has an office or even representatives there. An example would be a life insurance policy held by someone living in Buffalo, issued through a broker in Miami, and underwritten by Lincoln National in Indiana.

This set of regulations was the first of its kind and became the model that other states, and even the federal government, began to follow over the next few years.

It was also the first to advocate a risk-based approach to cybersecurity.

Creating a Common Language for Risk

The foundation of the regulation is its requirement for a complete risk assessment, which makes the regulation adaptable to process and technology changes and allows entities to focus on their actual security programs rather than treating the rule as just another compliance mandate. A rigorous, quantitative risk assessment also gives CISOs and CFOs a common language and useful information that they can integrate into their financials, their risk transference strategies, and their cost/benefit analyses of new technology investments that might improve overall enterprise risk.

In other words, they might be able to start understanding each other.

The Immaturity of Asset Visibility

Complying with the regulation’s required risk assessments, and managing cybersecurity in general, requires companies to have visibility into their assets, especially those that, if compromised, would most impact the business, customers, and shareholders. As Equifax and Facebook painfully demonstrated, information asset management at most companies is not very mature. It is hard to believe that, had Equifax’s management understood a breach would ultimately cost the company some $4 billion, they would not have done whatever it took to apply that Apache Struts patch.

And today, of course, the Microsoft Exchange breach has exposed at least 250,000 companies globally running non-cloud versions of Exchange, and it is doubtful that more than a small minority have any idea which assets are at risk or what those assets are worth.

Traditional Frameworks Making Matters Worse

The beginning of a risk assessment is an inventory of a company’s information assets and their associated business value, yet even businesses with a solid understanding of their information assets have likely never quantified the impact of losing those assets to a cyber breach. Traditional risk frameworks, while useful in providing guidance toward setting up a risk assessment, are actually part of the problem. The risk framework process is tedious, and the journey through the myriad corporate subject matter experts can be difficult on a good day. There is also a heavy reliance on practitioner intuition and experience, industry lore, and best practices. The tendency is to dither and debate not just the valuations themselves but the process required to get there.
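For a sense of how simple the starting point can be, here is a minimal sketch of an asset register with hedged low/likely/high impact estimates. The asset names and dollar figures are hypothetical, and no particular framework prescribes this shape.

```python
# Hypothetical first-pass asset register: each information asset carries
# a hedged (low/likely/high) dollar estimate of breach impact.
asset_register = [
    {"asset": "customer PII database", "owner": "CRM team",
     "impact_low": 1_000_000, "impact_likely": 4_000_000, "impact_high": 12_000_000},
    {"asset": "payroll records", "owner": "HR",
     "impact_low": 200_000, "impact_likely": 750_000, "impact_high": 2_000_000},
    {"asset": "public marketing site", "owner": "Marketing",
     "impact_low": 10_000, "impact_likely": 50_000, "impact_high": 150_000},
]

# Even this crude inventory yields a first answer to "what is at stake?"
total_likely = sum(a["impact_likely"] for a in asset_register)
print(f"Likely enterprise exposure across inventoried assets: ${total_likely:,}")
```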

Due to these obstacles, most companies attempt a risk assessment only once a year or when they are forced to by regulations, audit requirements, compliance issues, or by a third party.

A FAIR Standard

FAIR (Factor Analysis of Information Risk) is a risk framework that has gained acceptance as an international standard for quantifying information risk and is designed to address some of these framework weaknesses.

The FAIR framework aims to allow organizations to speak the same language about risk, apply risk assessment to any object or asset, view organizational risk in total, defend or challenge risk determination using advanced analysis, and understand how time and money will affect the organization’s security profile. All good.

FAIR uses dollar estimates for losses and probability values for threats and vulnerabilities. Combined with a range of values and levels of confidence, it allows for true mathematical modeling of loss exposures.
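To see what that modeling looks like in practice, the sketch below runs a simple Monte Carlo simulation of annualized loss exposure, drawing event frequency and loss magnitude from distributions rather than point estimates. The distribution choices and every parameter are illustrative assumptions, not values the FAIR standard prescribes.

```python
import numpy as np

# FAIR-style quantitative sketch: simulate annualized loss exposure as
# (loss event frequency) x (loss magnitude per event). The Poisson and
# lognormal choices, and all parameters, are illustrative assumptions.
rng = np.random.default_rng(seed=42)

TRIALS = 100_000
FREQ_MEAN = 0.6             # assumed: expected loss events per year
MAG_MU = np.log(250_000)    # assumed: median single-event loss of $250k
MAG_SIGMA = 1.2             # assumed: dispersion of single-event losses

event_counts = rng.poisson(FREQ_MEAN, size=TRIALS)
annual_losses = np.array([
    rng.lognormal(MAG_MU, MAG_SIGMA, size=n).sum() if n else 0.0
    for n in event_counts
])

print(f"Mean annualized loss exposure: ${annual_losses.mean():,.0f}")
print(f"95th-percentile (bad year):    ${np.percentile(annual_losses, 95):,.0f}")
```

The output is a loss distribution rather than a single number, which is precisely what lets ranges and confidence levels enter the budget conversation.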

Problems with FAIR

The problem with FAIR, apart from its implementation difficulty, is that it approaches risk analysis by deconstructing a large problem into many small problems whose solutions rely on guesses, on the presumption that this will solve the large problem more accurately. That thesis is not only incorrect, it leads to a larger problem: the sum of many small guesses is just a larger aggregated guess.

Instead of reducing the problem space, it actually expands it.

An even larger problem is that FAIR defines risk as the “probable” frequency multiplied by the “probable” magnitude of future loss. Probability here means a determination supported by evidence strong enough to establish presumption, but not proof, and so, by definition, it creates an unproven basis for both presumption and conclusion.

It approaches cyber risk this way because it assumes that there is no way to actualize the information asset value at risk. But, as you will see, we now have a way to do exactly that.

One Big Guessing Game

If a practitioner follows FAIR’s Risk Assessment Guide, s/he will see that 7 of its 10 steps call for an “estimate” (of probable threat frequency, control strength, vulnerability, probable loss, worst-case loss, and so on). With every reference to the word “estimate”, the practitioner is accepting a best guess that may not be backed by any objective criterion or metric and may be driven by an organizational agenda. Because all the derived values are based on those estimates, the FAIR risk model produces a great deal of work and, by its very definition, a distorted view of actual risk.

Most businesses use some form of threat and vulnerability identification software that collects log data in an effort to identify vulnerable network devices that are being attacked. But none of these conventional cybersecurity solutions tie those vulnerabilities and threats to specific assets.

Connecting Webs of Confusion

As a result, it has been impossible to connect the dots between the data coming from those solutions and the assets actually at risk. Knowing that thousands of vulnerabilities and threats are impacting thousands of servers does not identify the devices upon which the highest-value assets are stored or processed. Nor does this knowledge translate to which actions should be taken to minimize the company’s exposure.

In fact, today’s threat detection systems alert SOC teams and IT responders to attacks that may not expose critical assets at all. Because these systems are thus far incapable of prioritizing devices based on the value of the assets they hold, a great deal of time and resources is wasted responding to and eradicating low-priority threats.

The identification of vulnerabilities becomes meaningful only when they can be prioritized for action based on the loss impact and criticality of the information and systems to which the affected servers are connected, and on the vulnerability being exploited.
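To illustrate the difference that asset context makes, here is a hypothetical prioritization sketch in which the same critical CVE lands at the top or the bottom of the queue depending on the dollar value it exposes. The scoring formula, device names, and dollar values are assumptions for illustration; the CVEs are real Exchange vulnerabilities.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    device: str
    cve: str
    cvss: float          # technical severity, 0-10
    asset_value: float   # assumed business value of data on the device, in dollars
    exposed: bool        # reachable from the public internet?

def priority(f: Finding) -> float:
    """Loss-weighted priority: severity scaled by value at risk and exposure."""
    exposure_multiplier = 2.0 if f.exposed else 1.0   # assumed weighting
    return (f.cvss / 10.0) * f.asset_value * exposure_multiplier

findings = [
    Finding("exch-prod-01", "CVE-2021-26855", 9.8, 4_000_000, True),
    Finding("exch-test-07", "CVE-2021-26855", 9.8, 15_000, False),
    Finding("exch-hr-02", "CVE-2020-0688", 8.8, 750_000, True),
]

# Same CVE, very different urgency once asset value enters the formula.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.device:<12} {f.cve:<15} priority=${priority(f):>12,.0f}")
```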

Technologies to Provide Visibility

Several technologies available today use either the FAIR approach or one of the major risk frameworks as the basis for developing a risk-based cybersecurity management program, and they factor in simulated threat scenarios to approximate what might happen if a certain set of events occurs.

There is also a slate of early-stage companies that tie assets to devices and related CVEs and report asset threat escalations in real time, both in technical terms for SOC teams to pursue and in financial terms so the C-suite has visibility into a true threat landscape based on asset values at risk.
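What such an escalation might look like as a single record serving both audiences is sketched below; every field name and value here is hypothetical rather than taken from any particular product.

```python
# Hypothetical shape of a real-time asset threat escalation that carries
# technical fields for the SOC and financial fields for the C-suite.
escalation = {
    # technical terms, for the SOC team to pursue
    "device": "exch-prod-03",
    "cve": "CVE-2021-26855",
    "observed": "inbound exploit attempts from three source IPs",
    # financial terms, for C-suite visibility
    "assets_at_risk": ["executive mailboxes", "M&A correspondence"],
    "estimated_value_at_risk": 6_500_000,  # assumed dollar valuation
    "recommended_action": "isolate the device and apply the patch immediately",
}
print(escalation["device"], f"${escalation['estimated_value_at_risk']:,} at risk")
```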

The Best Time to Start is Now

But whatever approach and technology you choose for your own risk-based cybersecurity management program, today would be a good day to start. Every business, regardless of size, will benefit from a risk assessment: it provides increased visibility into the dangers of a cyberattack across all categories of information assets, and it enables the evaluation of day-to-day vulnerability exposures on the basis of actual threat data correlated with asset values at risk, instead of guesstimates and speculation.

In risk management, the journey is often the reward.

Just the activity of attempting to identify and understand all of the possible threats, vulnerabilities, and impacts to critical assets will result in a reduction of uncertainty and, collaterally, a reduction in risk. And if you operate in New York State, the NYDFS regulation requires that addressing vulnerabilities in public-facing applications containing customer PII take precedence over internal applications holding less sensitive information, so you will need to do this work anyway.

Many other states will follow this year and next with copy-cat regulations, so where your business operates will soon cease to matter.

Added Benefits

The big bonus, however, is that you might finally be able to align your cybersecurity budgets with the actual risk to your enterprise, communicate effectively with your CFO and your Board of Directors, optimize how your constrained cybersecurity resources are applied, and avoid becoming the next Equifax, Marriott, or Capital One.