Elements of Cyber War is part of a four-part series by Steve King. He leads the Advisory Services practice at CyberTheory.
In our previous installment, we discussed the economic conundrum of cyberwarfare and the various pressures organizations face in battling their nefarious opponents. In this installment, we move into the world of Information.
Information? What do we mean exactly?
In the context of cybersecurity, we are not talking about information warfare per se, or even intelligence about threats, though that plays a small role. What is typically meant by information warfare conjures up the recent Russian meddling and Asian psychological warfare, and it is certainly not new. Threat intelligence has been around a while as well, though there have been some recent advances that are interesting and may help us get to know our adversaries better.
To be precise, the information theater to which we are referring relates to one of the core elements of the attacker/defender dynamic: our attackers know a great deal about us, while we know very little, and in many cases nothing, about them. This, of course, provides a tremendous battlefield advantage to the other team. This asymmetric element pits our very siloed and segmented defenses against masquerading attackers about whom we have almost no information, and who consequently require very little information of their own to be successful.
Informational asymmetry also underlies our continuing failure to identify the exploitation of legitimacy (fakery) or to correctly attribute the source or nature of our attackers.
We are never sure whether Russia or Iran or China or young Robert Francis Baker living in his mom’s basement down on First Street is the actual attacker. And it dramatically affects our ability to respond or to even develop a policy for response protocols.
As one of many examples, it appears that China likely recruited the hacker who pulled off the massive cyber-attack on Anthem, in which 78.8 million consumer records were exposed … but we don’t know that for sure. Even though seven state insurance commissioners conducted a nationwide examination of the breach over the last three years, and Anthem hired Mandiant to run its own internal investigation, we still don’t know.
Despite uncovering only an apparent source IP address, this army of investigators concluded that the hack originated in China and began when a user at an Anthem subsidiary opened a phishing email, which gave the hacker access to Anthem’s data warehouse.
The hack was of course devastating to Anthem and the 78.8 million members whose sensitive PII was exposed, but while we now know how it was carried out, we are unable to conclusively determine the actual perpetrator.
A widening information gap
The result of all of this investigation and the more than $300 million Anthem has spent in recovery and forensics is a slight increase in general awareness about the nature of our adversaries, but a widening of the actual information gap itself. We think it was China and we “know” they are always doing this sort of thing, but that information does not advance our ability to defend in the future.
We don’t know who we are fighting
This information gap contributes to another imbalance in attacker/defender dynamics where we stack up a relatively small contingent of trained defenders protecting millions of applications and systems located in fixed positions against tens of thousands of unknown global cyber attackers continuously examining tens of millions of dispersed targets.
In terms of military tactics, state armies like ours generally fight in an orderly framework while non-state and individual terrorist organizations successfully use guerrilla warfare methods designed to leverage the disparities in power advantage.
Since we don’t know who we are fighting, and we must defend fixed positions without specific rules of engagement, it is quite difficult to engage successfully and almost impossible to imagine victory.
Our adversaries regularly probe and collect reams of information about our cybersecurity defenses. This is not difficult to do since we openly publish all of our academic cybersecurity research and in the few cases where we don’t publish, our adversaries just steal our IP anyway.
Reverse engineering an AI/ML cyberdefense system to discover the methods it uses to provide that defense is not hard to do. Learning which systems deploy which technologies is easy too: U.S. product vendors proudly broadcast their sources and, at a macro level, even their techniques.
A classic example can be found in Security Information and Event Management (SIEM) systems. Love them or dismiss their effectiveness, a SIEM is widely acknowledged as a fundamental cybersecurity requirement for monitoring, detecting, and alerting in real time on the presence of malware or a threat vector in our computing infrastructure.
Though there are other approaches like network behavioral monitors, all enterprises must have some way to determine the presence of a threat and to notify early respondents so they can move to mitigate before the damage spreads.
The Achilles’ heel of SIEMs, and of all other behavioral detection systems, is the detection threshold. Those thresholds (aka policies) for the detection of certain behaviors must be set low enough that a brute-force password attack (for example) cannot evade detection, but not so low that activity other than brute-force attacks triggers an alert and results in a false positive.
Set too low, the system will generate tons of false positives. Set too high, and the system will fail to catch true predators. These threshold variations are not secrets.
SIEM and network monitoring vendors publish the default ranges along with recommended settings. Since IT resources are under continual pressure, the natural response is to accept the defaults and install as recommended. This enables even the most dim-witted attacker to tune a vulnerability probe to fly beneath the radar, looking for software holes, open backdoors, available credentials and other keys to the kingdom. Those findings are reported back to the command-and-control (C&C) server, and the data informs the next cyber-attack on that enterprise.
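To make the threshold problem concrete, here is a minimal sketch of the kind of sliding-window rule a SIEM might apply to failed logins. The threshold and window values are illustrative defaults, not any vendor's actual settings, and the class name is our own invention; the point is that an attacker who knows the published defaults can pace a probe to stay just beneath them.

```python
from collections import defaultdict, deque

# Hypothetical sketch of a threshold-based brute-force detection rule, as a
# SIEM might implement it. Threshold and window values are illustrative
# defaults, not any vendor's real settings.
class BruteForceRule:
    def __init__(self, threshold=5, window_seconds=60):
        self.threshold = threshold          # failed logins before alerting
        self.window = window_seconds        # sliding time window in seconds
        self.failures = defaultdict(deque)  # source IP -> failure timestamps

    def record_failure(self, source_ip, timestamp):
        """Record a failed login; return True if the rule fires an alert."""
        q = self.failures[source_ip]
        q.append(timestamp)
        # Drop failures that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

rule = BruteForceRule(threshold=5, window_seconds=60)
# An attacker spacing attempts 20 seconds apart never accumulates 5 failures
# inside any 60-second window -- "low and slow" evasion of a known default.
alerts = [rule.record_failure("203.0.113.7", t) for t in range(0, 200, 20)]
print(any(alerts))  # the slow probe never trips the default threshold
```

Knowing the recommended settings, the attacker simply chooses a request rate below the alerting threshold; the rule fires only for the noisy, naive version of the same attack.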
We don’t know what we don’t know
These low and slow vulnerability probes go on for months and years, collecting and distributing useful information back to the attackers. Probes are likely floating around your network infrastructure as you read this. By having less information than our attackers, we unintentionally provide a fully cooperative pathway to the next series of breaches.
Less is not more
Attackers frequently scan many thousands of potential targets before a successful compromise, and much is learned from each one. Meanwhile, an abundance of online hacker communities willingly dispenses reams of information: how-to tutorials for every kind of hack imaginable, instructions in the use of available open-source penetration testing tools, and malware kits tailored by attack type, complete with user manuals that describe in detail the steps required to deploy them.
The defenders have information security communities like ISACA and ISC2, industry conferences like RSA, and vendor product user groups but our ability to connect information to execution does not compare to the way the bad guys do it.
Big hat, no cattle
In other words, we talk a lot about this stuff, but we do very little in terms of actually implementing best practices.
Our adversaries are busy deploying their well-informed attack protocols while our information security community is continually distracted by the necessities of daily survival.
Back when we first advanced this thesis in 2015, an ideologically driven actor (known as HackBack!) was running rampant with a slew of significant cyber-attacks, including a big one against the Italian surveillance technology company “Hacking Team”. Motivated by what he perceived to be human-rights concerns, Mr. HackBack! compromised the company’s internal network and publicly released 400GB of data, including email correspondence between employees and their clients, proprietary source code, financial records, and sensitive audio and video files. Then he published a set of instructions detailing exactly how he mounted his attacks, including the schematic for a zero-day exploit he had developed himself.
He then published a list of off-the-shelf tools and specific guidance on using exploit kits to carry out similar compromises. After the hijacked data was made publicly available via Twitter, and a fully searchable database was hosted on WikiLeaks, the company suffered significant and embarrassing reputational damage and had a global operational license revoked.
Within just a few days of the breach, two exploit kits, Angler and Neutrino (which have since morphed into far more advanced EKs), had incorporated new exploits revealed in Mr. HackBack!’s publications, increasing their functionality and helping other cybercriminals compromise new targets with new malware.
Let’s just share
In response to the growing imbalance, the federal government began encouraging businesses to share threat intelligence among themselves, but almost every business has ignored the suggestion. We keep trying to address the issue in forums, seminars and conferences instead.
There is a weird layer of general apathy hanging over this industry, a constant streak of existential acquiescence, as in … “there’s really nothing we can do, but let’s keep pushing this boulder up the mountain anyway.”
On one promising front, we have recently made some progress in threat intelligence technologies that may have a small but favorable impact. Threat intelligence is a potentially useful way to start shifting some of the imbalance by providing insight into what the bad guys are doing and prompting companies to rebalance their cybersecurity defense portfolios accordingly.
It is one of the very few approaches with a chance of providing a little information relief, as it offers current insights about emerging threats and their evolution. These systems track adversaries across multiple types of unique and hard-to-reach online communities, from elite forums and illicit marketplaces to chat platforms, and then provide visibility into cybercrime and fraud practices; international, political, and societal dynamics; trends in malware and exploits; specifics about disruptive and destructive threats; and physical and insider activities.
By providing an intelligence profile of the threat landscape, this contextual view offers concrete input to enable enterprises to more effectively rebalance their cyber-defense portfolios with respect to emerging and existing threats, adversaries, and relevant business risks.
Money and resources
This is of course a whole lot different than having Mr. HackBack!’s instruction manual and $50 for an exploit kit. But at least defenders now have some information about the adversarial community and may be able to determine effective cybersecurity technologies and processes to which they can shift emphasis and stave off a few of these attack vectors. It costs a lot of money, takes a lot of resources, needs support at the executive level and is not easy to implement. It’s something, but nowhere near enough.
Knowing more about our adversaries and their behavior is about to become even more critical as we embark into the IoT world in earnest. A simple example of how IoT threats pose significantly higher risk can be found in the recent graduation of the Mirai botnet into a grander version of itself. This new descendant is casting a much wider net than its predecessors and is now infecting systems normally found within enterprises. And, very few IoT devices can be either updated or secured from being drawn into a botnet army.
Steps to battle advanced complexity
The attacker/defender dynamic in Information has grown in complexity and has broadly expanded the gap that we saw four years ago. So what can we do to address it?
1. Find a way to centralize and distribute threat intelligence in an efficient, actionable format.
2. Openly share the vulnerability data that we now hoard privately.
3. Mount an offensive against the markets, forums and illicit distribution centers on the dark web.
4. Adopt a more aggressive cybersecurity posture so we can monitor adversaries and understand what they are doing and planning to do.
5. Organize a united, cohesive response within the community.
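The first step above, an efficient, actionable sharing format, already has candidates in the wild; STIX 2.1 is one existing standard. As a purely illustrative sketch (the function and field choices here are our own simplification, loosely modeled on a STIX indicator, not a spec-complete document), a shareable threat record could be as small as this:

```python
import json
from datetime import datetime, timezone

# A minimal, hand-rolled threat-intelligence record loosely modeled on the
# STIX 2.1 "indicator" object -- an illustration of an efficient, actionable
# sharing format, not a spec-complete implementation.
def make_indicator(pattern, description, confidence):
    return {
        "type": "indicator",
        "pattern": pattern,              # what to match in telemetry
        "pattern_type": "stix",
        "description": description,      # human-readable context
        "confidence": confidence,        # 0-100: producer's certainty
        "valid_from": datetime.now(timezone.utc).isoformat(),
    }

record = make_indicator(
    pattern="[ipv4-addr:value = '203.0.113.7']",
    description="Source of low-and-slow credential probes",
    confidence=70,
)
print(json.dumps(record, indent=2))
```

Because the record is plain structured data, it can be centralized, deduplicated, and pushed to every subscriber's SIEM automatically, which is precisely the "connect information to execution" capability the attackers already have and we lack.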
Until these steps are taken, it appears that the information gap will continue to expand.
Elements of Cyber War is part of a four-part series by Steve King. Subscribe to get a sneak peek at part 3.