Disadvantages With Technology in Cybersecurity

Technology is Great, But it Ain’t The Answer

The first major technologies were tied to survival, hunting, and food preparation. In 2.5 million years, nothing has changed.

The thesis for cybersecurity is simple: We have too much, it’s the wrong kind, and it does us little good.

I know I just made about 4,500 enemies: one for each of the 4,500 product vendors in the cybersecurity space. But if we are to be honest about our present state, it would be hard to argue that we don’t have enough technology.

Wouldn’t it?

Technology Fanboy

I am a huge fan of technology and all of the vendors in this space. My entire orientation throughout my IT career has been on the technical and operational side. My cybersecurity experience has always been in the trenches, analyzing, assessing and remediating. I have worked with teams of bankers and industry leaders on process, compliance and audit, but my interest and passion are in technology.

So, when I ask how many SIEMs it takes to screw in a lightbulb, or how many EDR products, firewalls, network behavioral analytics tools, anti-virus offerings, vulnerability management platforms, threat intelligence feeds, etc., it is because I have seen so many of these products at work, demonstrating their strengths and weaknesses. When I hear about a new product or even a new service in the space, my initial reaction is the same disbelief and skepticism felt by most every other CISO I know.

BeyondTrust, Thycotic/Centrify, One Identity and CyberArk all make a great Privileged Access Management product, so do we really need 20 other leaders, challengers, niche players and visionaries in the Gartner Magic Quadrant for PAM? Fortinet makes the best SIEM product on the market. Do we need another 18 vendors? Have any of these Gartner-approved solutions solved for data breaches?

Based on the available statistics, they apparently have not.

Gartner?

Does anyone pay attention to Gartner anyway? Pretty much everyone I know in the industry understands that if you are a large-check-writing Gartner customer, you kind of get to call the shots as to how you are categorized and positioned and in which quadrant you get placed, right?

Small check writers, not so much.

How is this a legitimate industry analysis? And, what do actual end-users think about all these products? You know, like CISOs?

Who knows?

Why?

Because no one asks them.

So, Who Knows Best?

Recently abandoned as a bad idea, a plan hatched by the insurance industry a few years ago had a group of insurers, working through the Marsh brokerage unit of Marsh & McLennan Cos., evaluate cybersecurity software and technology sold to businesses, collate scores from participating insurers, and, with the assistance of (wait for it) … Microsoft, identify the products and services considered effective in reducing cyber risk. Yes, this is the same Microsoft that recently disclosed a waterfall of vulnerabilities across its product line.

Everyone is susceptible, but asking a leading technology vendor with a poor record of cyber defense in its own product suite to sit in judgment over other vendors’ efficacy in reducing cyber risk seems a bridge too far to me.

The theory behind this plan is that a collaborative effort across many insurers has a better chance of bringing to light weak cybersecurity products that should be avoided by manufacturers in global supply chains.

Called “Cyber Catalyst”, the Marsh initiative was supposed to focus on offerings that address risks such as data breach, business interruption, data corruption and cyber extortion. These included technology-based products such as firewalls and encryption, tools for monitoring threats, training and incident response planning.

Advantage: Bad Guys

In addition to the over-abundance of redundant technologies, none of which appear to be capable of stopping cyberattacks, we have an asymmetrical disadvantage in the attacker/defender dynamic. The attacker has at its disposal the very latest pre-programmed kits and techniques, available both as software agents and as a service, that can be used to penetrate and disrupt our latest defenses.

We, in turn, develop new defense techniques whose effectiveness increases rapidly until it reaches a level that prompts adversaries to respond.

Attackers quickly discover ways to evade the defensive technique and develop countermeasures to reduce its value. That is the cycle we have been stuck in for years.

Good for attackers. Bad for defenders.

In the meantime, we have just expanded our threat landscape through an almost universal embrace of an ideology called “technological solutionism”. This ideology is endemic to Silicon Valley and it reframes complex social and technical issues as “neatly defined problems with definite, computable solutions … if only the right algorithms are in place!”

This highly formal, systematic, yet socially and technically myopic mindset is so pervasive within the industry that it has become almost a cartoon of itself. How do we solve wealth inequality? Blockchain. How do we solve political polarization? AI. How do we solve climate change? A blockchain powered by AI. How do we solve cybersecurity attacks? A blockchain powered by AI with some advanced predictive analytics and a little machine learning.

This constant appeal of a near-future with perfectly streamlined technological solutions distracts and deflects us from the grim realities we presently face. You need only attend one RSA conference to grok that reality.

RSAC: The End

The preeminent cybersecurity conference on the entire planet has degenerated into a carnival atmosphere with barkers, cash-giveaways, side-shows, dancing girls, skimpily clad booth hostesses, and serious booze parties.

Not a critique, just reality – I fully understand the challenges of attracting potential buyers to your pitch amid the noise and chaos of 4,500 competitors – one must do something.

But if you just dropped in from Mars, you would conclude that cybersecurity is an annual comedy event where 4,500 vendors participate in the art of inflated promises, hard-ball sales tactics, cherry-picked customer success stories, collusive relationships with a handful of leading industry analysts, supported by equally skimpily clad evidence of success, all culminating with a crazed, Super Bowl-style party in February in San Francisco.

Given our recent bout with the pandemic, it is likely that attendance will be substantially down in coming years, providing cover for marketers who don’t believe that the customer acquisition cost justifies the spend – and we are blind to whether, or when, a return to normal may happen.

Rendering trade shows a risky expense.

It turns out that a joint research tool created by CNN and Moody’s, the Back to Normal Index reached 92% as of mid-year 2021. This analytic scale — which touches on many aspects of life in America — set ‘normal’ at 100, representing how things were going in March 2020.

A return to normalcy includes a return to business as usual and the consideration of trade shows as a potentially enduring source of new business generation in America.

The index is made up of consumer credit scores, unemployment claims, job postings, air travel and hotel occupancy data — and attempts to be a barometer of recovery.

92% implies we’re almost there, but that last mile might be a little tricky – working from home is now accepted and expected, geographically dispersed workforces are now normalized, and while enthusiasm for vacation travel is high, that enthusiasm may be skewing the air travel data.

Finally, we are not out of the COVID woods yet.

The Death of Trade Shows

Data suggests that the B2B trade show market in the US was worth 15.58 billion U.S. dollars in 2019 and took a massive hit in 2020, dropping 75% in value. If the events markets were the pathways for new solution vendors to enter competitive spaces, then the prospects for recovery look grim, as industry analysts don’t predict a return to 2019 levels for another five years.

What do trade shows have to do with technology?

Unfortunately, a lot.

For example, AI dominates current technology discussions from boardrooms to venture capital LP meetings, to CISO conferences and the State Department. China continues to march far ahead of us in AI and ML technology, having stolen much of it from our technology startups, and has developed quantum solutions we are still trying to understand. What do we do instead of developing our own quantum capabilities? We haul folks like Zuckerberg in front of Congress and get his promise to develop better AI for content moderation.

AI Anyone?

But AI remains the tent-pole of the cybersecurity technology framework today. The now old joke continues that if you want to raise VC for your cool new cybersecurity whatever, make sure you include about 25 references to AI throughout your pitch deck.

To build cyber defenses capable of operating at the scale and pace needed to safeguard our information assets, artificial intelligence (AI) could be a critical component in the tech stack that most organizations can use to build a degree of immunity from attacks. Given the need for huge efficiencies in detection, provision of situational awareness and real-time remediation of threats, automation and AI-driven solutions should be a major contributor to the future of cybersecurity.

By efficient, we mean AI-based solutions that automate human analysis and substitute for it in real time, replacing a security analyst team with faster, more accurate results.

We are not there yet.

And as we have seen, the cybercrime data to date is evidence that any technological developments in AI are quickly seized upon and exploited by the criminal community, posing entirely new challenges to cybersecurity in the global threat landscape.

ML: Risky Business

One weakness of machine learning models is that they require constant supervision to avoid becoming corrupted, which is something the bad guys regularly manage to do. The use of AI and ML in detection requires constant fine-tuning, and AI has yet to invent new solutions to security problems; its principal value has been in doing what humans already do, but faster.
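To make that “constant supervision” concrete, here is a minimal sketch of the kind of monitoring job a detection team might run: compare a live feature distribution against its training-time baseline and flag large shifts for human review. The statistic, the alert threshold and the synthetic data are illustrative assumptions, not any vendor’s method.

```python
import math
import random
import statistics

def drift_score(baseline, live):
    """Two-sample z-statistic on the mean: a crude but common drift signal.

    A large score suggests the live feature distribution has shifted away
    from the training-time baseline and deserves human review.
    """
    n1, n2 = len(baseline), len(live)
    m1, m2 = statistics.fmean(baseline), statistics.fmean(live)
    v1, v2 = statistics.pvariance(baseline), statistics.pvariance(live)
    return abs(m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Synthetic stand-ins for a model input feature (e.g. a traffic statistic).
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]   # training-time sample
live_ok = [random.gauss(0.0, 1.0) for _ in range(5000)]    # live traffic, unchanged
live_bad = [random.gauss(0.5, 1.0) for _ in range(5000)]   # drifted or poisoned input

score_ok = drift_score(baseline, live_ok)    # small: no action needed
score_bad = drift_score(baseline, live_bad)  # large: flag for retraining/review
```

A real pipeline would track many features, use a statistic suited to each one, and tune the alert threshold against historical false-positive rates – which is exactly the ongoing human effort the paragraph above describes.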

Among the more nefarious uses of AI by our adversaries are worms that learn how to avoid detection or change behavior on the fly to foil pattern-matching algorithms. An active worm with lateral movement can roam targeted networks undetected for years.

Another risk is intelligent malware that can wait until a set of conditions is met to deploy its payload. And once attackers breach a network, they can use AI to generate activity patterns that confuse intrusion detection systems or overwhelm them with false alerts.

The highly targeted form of the phishing exploit known as “spear phishing” currently requires considerable human effort to create messages that appear to come from known senders. Future algorithms will scrape information from social media accounts and other public sources to create spear phishing messages at scale.

We already do something similar, relying on rough-hewn AI algorithms in sales to identify potential prospects and distribute messaging around what our algorithms perceive as pain points.

Sometimes this works OK and sometimes it doesn’t. I am frequently reminded of a sales outreach email congratulating a Sales VP on his promotion based on data scraped from an obituary of his predecessor.

While we experiment and fund more start-ups, the use of AI by criminals will potentially bypass – in an instant – entire generations of technical controls that industries have built up over decades.

Fake Everything

In the financial services sector, we will soon start to see criminals deploy malware with the ability to capture and exploit voice synthesis technology, mimicking human behavior and biometric data to circumvent the authentication controls protecting assets in people’s bank accounts, for example.

In short order, the criminal use of AI will generate new attack cycles, highly targeted and deployed for the greatest impact, and in ways that were not thought possible in industries never previously targeted: areas such as biotech, for the theft and manipulation of stored DNA code; mobility, for the hijacking of unmanned vehicles; and healthcare, where ransomware will be timed and deployed for maximum impact.

Biometrics is being widely introduced in different sectors while at the same time raising significant challenges for the global security community. Biometrics and next-generation authentication require high volumes of data about an individual, their activity and behavior. Voices, faces and the slightest details of movement and behavioral traits will need to be stored globally, and this will drive cybercriminals to target and exploit a new generation of personal data.

Exploitation will no longer be limited to the theft of people’s credit card numbers but will target the theft of their actual being, their fingerprints, voice identification and retinal scans.

Most cybersecurity experts agree that three-factor authentication is the best available option and that two-factor authentication is a baseline must-have. ‘Know’ (password), ‘have’ (token) and ‘are’ (biometrics) are the three factors for authentication, and each one makes the process stronger and more secure. For those CISOs and security analysts charged with defending our assets, understanding an entire ecosystem of biometric software, technology and storage points makes it even harder to defend the ever-expanding attack surface.
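As a sketch of how those three factors combine in practice, the following assumes a hypothetical `authenticate` helper that requires all of ‘know’ (a salted password hash), ‘have’ (an RFC 6238 time-based one-time password) and ‘are’ (a match score from some biometric sensor). The function names and the 0.95 match threshold are illustrative, not from any particular product.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: the 'have' factor."""
    counter = struct.pack(">Q", at // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password: str, token: str, biometric_score: float,
                 stored_hash: bytes, salt: bytes, secret: bytes,
                 now: int, threshold: float = 0.95) -> bool:
    """All three factors must pass: know AND have AND are."""
    know = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    have = hmac.compare_digest(totp(secret, now), token)
    are = biometric_score >= threshold  # match score from a biometric sensor
    return know and have and are
```

Note the defender’s burden the paragraph above describes: the password hash, the TOTP secret and the biometric template each live in different stores, and every one of them is now a target worth protecting.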

This is the “solutionist” ideology at work in the real world.

Solutions For Nothing And No Solutions For Anything

AI and Biometrics in the near term are not going to solve any of the problems that our current technology stack can’t solve. Because most of our breaches and attacks come as the result of poor processes, inadvertent human error, insufficient human resources and skills, and either too many redundant technologies or too few of the right ones. None of these problems will disappear because we have discovered the world’s coolest AI or Biometric solution for cybersecurity defense.

This “solutionist” ideology extends beyond cybersecurity and now influences the discourse around how to handle doctored media.

The solutions being proposed are often technological in nature, from “digital watermarks” to new machine learning forensic techniques. To be sure, many experts are doing important security research to make the detection of fake media and cyberattacks easier in the future. This is important work and is likely worthwhile.

But it is unlikely that any AI technology, all by itself, would prevent cyberattacks exploiting vulnerabilities that we fail to patch, or fix the deep-seated social problem of truth decay and polarization that social media platforms have played a major role in fostering.

I don’t think any technology argument would convince the remaining shareholders of Equifax that an AI solution would have automatically applied the patches necessary to prevent the Apache Struts attack. AI might have generated a loud alert that significant asset values were at risk, but the last time I checked, people still would have had to apply the patch and do the configuration management required to cloak the vulnerability.

System glitches don’t occur in a world that runs on the promise of AI or Biometric technology. Banking still runs most of its legacy systems on 220 billion lines of mainframe COBOL code, written well before the turn of the century. In 2020, system glitches caused more broad outages than cyberattacks did.

There ain’t no magic wands that can automate legacy systems maintenance.

The 5G Rocket Man

And it is about to get far worse. A new generation of 5G networks will be the single most challenging issue for the cybersecurity landscape. It is not just faster internet; the design of 5G will mean that the world will enter an era where, by 2025, some 75 billion devices will be connected to the internet, running critical applications and infrastructure at nearly 1,000 times the speed of the current internet.

This will provide the architecture for connecting whole new industries, geographies and communities and at the same time it will hugely alter the threat landscape, as it potentially moves cybercrime from being an invisible, financially driven issue to one where real and serious physical damage will occur at a 5G pace.

5G will provide any attacker with instant access to vulnerable networks.

When this speed is combined with enterprise and operational technology, a new generation of cyberattacks will emerge, some of which we are already seeing. The ransomware attack against the US city of Baltimore, for example, locked 10,000 employees out of their workstations. In the near future, smart city infrastructures will provide interconnected systems at a new scale, from transport systems for driverless cars, automated water and waste systems, to emergency workers and services, all interdependent and as highly vulnerable as they are highly connected.

In 2017, the WannaCry attack that took parts of the UK’s National Health Service down required days to spread globally, but in a 5G era, the malware would spread at the speed of light. It is clear that 5G will not only enable great prosperity and help to save people’s lives, it will also have the capacity to thrust cybercrime into the real world at a scale and with consequences yet unknown. The bad guys, including our nation-state adversaries, will be leveraging 5G to maximize their illicit campaigns, while we will be pedaling fast just to stay alive.

We don’t have the people or technology to combat and respond to the threats and we don’t have the discipline or resources to implement, manage and maintain the controls necessary to defend our assets.

Walking on Eggshells

The most dangerous element evolving from “technological solutionism” is not that industry leaders are coaxed into the chase for the next coolest bright shiny object. It is instead that the ideology itself is so easily used as a smokescreen for deep structural problems in the technology industry. What is now blindingly obvious to even the most casual observer is that technology alone has not been able to prevent breaches, loss of data, business interruption, data corruption and cyber extortion.

In fact, the more technology we develop and apply, and the more money we spend on cybersecurity defense, the greater the increase in cybersecurity breaches. And those breaches are only the ones that we a) know about and b) get reported.

Over the past decade, cybercriminals have been able to seize on a low-risk, high-reward landscape in which attribution is rare and significant pressure is placed on the traditional levers and responses to cybercrime.

What I find interesting amid this onslaught is that businesses of all types remain in denial about the threat. It is clear from 10-K filings that still today, despite countless warnings, case studies and an increase in overall awareness, it is only in the aftermath of a cyberattack that cybersecurity moves high onto the board agenda in a sustainable way.

In the year before it was hacked, Equifax made just four references to ‘Cyber, Information Security or Data Security’ vs a credit rating industry average of 17 and an overall US average of 16.

In fact, Equifax’s frequency of four matched the average for credit rating agencies way back in 2008, implying a full decade of under-prioritization of security by the company. The term ‘cyber’ features more heavily in Equifax’s report today than in that of leading cybersecurity specialist FireEye, which has 117 mentions of ‘cyber’ to Equifax’s 139. Equifax’s breach costs are currently running to $1.4 billion, while FireEye’s entire operating expense equals $1.4 billion over the same period.

Think about that.

Is it obvious that organizations with fewer references to cybersecurity in their annual reporting are less security mature and more likely to be breached? Or, is it more likely that cybersecurity is not high enough on the agenda for the board and executive to feature it in their flagship report?

With the annual report being such a significant communications tool, we can use it as an indicator as to the strength of the top-down security culture within an organization.

But so can our adversaries.

Easy Reads

In a stunning example of this information asymmetry, we see that cybercriminals can follow a similar process as part of their open-source intelligence, identifying likely corporate victims perceived as the lowest hanging fruit. It is not a coincidence that Marriott, Anthem, Equifax, Yahoo, Home Depot, Sony, Adobe, etc., were among the many with the fewest references to cybersecurity in their pre-breach 10-Ks.

If we stay in denial and do nothing to change the course, in the next few years, the cybersecurity landscape will worsen significantly and any chance of protecting information assets, assuring truthful social media and providing data privacy will disappear completely.

Existential threats? Forget about Global Warming.

Years from now, we may all be speaking a different language.

History

World War II history reveals similarities between then and now, and underscores the possibility, in fact the outright probability, of success in reengineering that asymmetry and creating a level playing field.

Professor Richard Overy, the famed British historian, reminds us that while in his prison cell at Nuremberg, Hitler’s foreign minister, Joachim von Ribbentrop, wrote a brief memoir in the course of which he explored the reasons for Germany’s defeat. He picked out three factors that he thought were critical: the unexpected ‘power of resistance’ of the Red Army; the vast supply of American armaments; and the success of Allied airpower.

0 For 3

British forces were close to defeat everywhere in 1942. The American economy was a peacetime economy, unprepared for the colossal demands of total war. The Soviet system was all but shattered in 1941, two-thirds of its heavy industrial capacity captured and its vast air and tank armies destroyed.

This was a war, Ribbentrop ruefully concluded, that ‘Germany could have won’.

Soviet resistance was in some ways the most surprising outcome. The German attackers believed that Soviet Communism was a corrupt and primitive system that would collapse, in Goebbels’ words, ‘like a pack of cards’.

The evidence of how poorly the Red Army fought in 1941 confirmed these expectations. More than five million Soviet soldiers were captured or killed in six months; they fought with astonishing bravery, but at every level of combat were outclassed by troops that were better armed, better trained and better led.

This situation seemed beyond remedy.

Yet within a year, Soviet factories were out-producing their richly-endowed German counterparts – the Red Army had embarked on a thorough transformation of the technical and organizational base of Soviet forces, and a stiffening of morale, from Stalin downwards, produced the first serious reverse for the German armed forces when Operation Uranus in November 1942 led to the encirclement of Stalingrad and the loss of the German Sixth Army.

Within one year.

Don’t Beat Them With Your Own; Just Copy What They Have

The Russian air and tank armies were reorganized to mimic the German Panzer divisions and air fleets; communication and intelligence were vastly improved (helped by a huge supply of American and British telephone equipment and cable); training for officers and men was designed to encourage greater initiative; and the technology available was hastily modernized to match Germany’s.

The ability of the world’s largest industrial economy to convert to the mass production of weapons and war equipment is usually taken for granted. Yet the transition from peace to war was so rapid and effective that America was able to make up for the lag in building up effectively trained armed forces by exerting a massive material production superiority.

This success owed something to the experience of Roosevelt’s New Deal when for the first time, the federal government began to operate its own economic planning agencies; and it owed something to the decision by the American armed forces in the 1920s to focus on issues of production and logistics in the Industrial War College set up in Washington.

But above all, it owed a great deal to the character of American industrial capitalism, with its ‘can-do’ ethos, high levels of engineering skill and tough-minded entrepreneurs. After a decade of recession, the manufacturing community had a good deal of spare, unemployed capacity to absorb (unlike Germany, where full employment was reached well before the outbreak of war, and gains in output could only really come from productivity improvements).

Even with these vast resources at hand, however, it took American forces considerable time before they could fight on equal terms with well-trained and determined enemies.

This gap in fighting effectiveness helps to explain the decision taken in Washington to focus a good deal of the American effort on the building up of massive air power. Roosevelt saw air strategy as a key to future war and a way to reduce American casualties.

At his encouragement, the Army Air Forces were able to build up an air force that came to dwarf those of Germany and Japan. At the center of the strategy was a commitment to strategic bombing, the long-range and independent assault on the economic and military infrastructure.

Bombing provided the key difference between the western Allies and Germany. It played an important part in sustaining domestic morale in Britain and the USA, while its effects on German society produced social disruption on a vast scale (by late 1944, 8 million Germans had fled from the cities to the safer villages and townships).

The debilitating effects on German air power then reduced the contribution German aircraft could make on the Eastern Front, where Soviet air forces vastly outnumbered German. The success of air power in Europe persuaded the American military leaders to try to end the war with Japan the same way.

City raids from May 1945 destroyed a vast area of urban Japan and paved the way for a surrender, completed with the dropping of the two atomic bombs in August 1945. Here, too, the American government and public were keen to avoid further heavy casualties.

Difficult Decisions Then; Impossible Decisions Now

There were weaknesses and strengths in Hitler’s strategy, but no misjudgments were more costly in the end than the German belief that the Red Army was a primitive force, incapable of prolonged resistance, or Hitler’s insistence that America would take years to rearm and could never field an effective army, or the failure to recognize that bombing was a threat worth taking seriously before it was too late.

Another 0-for-3 day at the plate.

Military arrogance and political hubris put Germany on the path to a war she could have won only if these expectations had proved true.

There are lots of moving parts – economic, tribal, social, political, geophysical, psychological and logistical – that can fall into the stew of wartime decisions. And they all do.

There are so many similarities between our current story and our wartime history of only some 80 years ago that we would be foolish to ignore them. Learning from history, however cumbersome, rather than repeating every step, is always a good strategy for survival.

It is now critically important for every American citizen to review our World War Two history against the current backdrop of this existential cybersecurity threat; this undeclared declaration of war; this real and present danger to our lifestyles, freedom, beliefs, ideology, social and cultural fabric and our entire future way of life.

This threshold upon which we stand and teeter is about to be tested further through cyberattacks and counterattacks involving Russia and Ukraine and unnamed third-party nation-state proxies, floating about server farms in Eastern Europe.

There Are Many Things We Can Do – Right Now

There is still time, and given that we can accept what must be done, we have steps and processes that will pave the way for action, but only if and when we are all committed to change. Right now, I suspect we have a larger pro-rata population of dangerously unconscious citizens than we did 82 years ago on that balmy December morning when the U.S. naval base at Pearl Harbor, Oahu, was ferociously attacked by swarms of soon-to-be-declared adversaries.

Changing the world has been a sinkhole of human energy for hundreds of years – changing ourselves is much harder to do, because it starts with admitting we have been in denial, but not unlike other habits, one step leads to another, away from our compulsions.

Professor Richard Overy’s brilliant historical account, “World War Two: How the Allies Won”, is a great place to start.
