On Preventative Operational Impact Metrics 1


BLUF – The disconnect between cyber and business is nearly religious – articles of faith for seekers of proof. Whether at a systemic or an organisational scale, there is much to unpack in cyber business metrics. This is the first of several posts on the topic.

Background

Lloyd’s of London has announced a shift in policy ending coverage for catastrophic nation-state cyber attacks. They determined it is impossible to estimate the exponentially growing costs of wide-scale incidents, based on the fallout from Russia’s NotPetya. Whilst reasonable, it raises bigger questions of systemic risk within cyber insurance. It’s easy to understand insurers not covering nation-state acts of war (if we use those terms), but what about risks borne of criminal events? Is declining to cover the results of nation-state activity and their impacts going to be enough, or will insurers need to address the broader systemic risks, adjusting policies at scales where the outcomes simply cannot be predicted?

Questions for another post, not today.

Today we will look at basic, smaller-scale measurements: a challenging component. A major difficulty of cyber is creating meaningful metrics the business can understand – particularly in questions of resource allocation. These discussions are often contentious, as cyber security is considered a cost centre generating preventative measures. To say nothing of proving a negative, how do you measure what hasn’t happened?

Often the way we talk about cyber is based on the average cost of an incident. On average, an incident costs X dollars, with Y more involved in further recovery over a period of Z. The incident hasn’t happened, so the discourse is theoretical, speaking of potential loss mitigation. Essentially, we are saying we are preventing you from dealing with an additional cost you’ve not yet paid and may never see.
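To make the shape of that argument concrete, here is a minimal sketch (in Python) of the expected-loss arithmetic the conversation usually rests on. Every figure, the incident probability and the halved likelihood are illustrative assumptions rather than data from any real incident – which is precisely the problem the rest of this post describes: the claimed ‘savings’ only exist if the probabilities are believed.

def expected_annual_loss(incident_cost: float,
                         recovery_cost: float,
                         annual_probability: float) -> float:
    # Expected loss per year = (direct cost X + recovery cost Y) * likelihood
    return (incident_cost + recovery_cost) * annual_probability

# Hypothetical inputs: X = direct incident cost, Y = further recovery cost,
# and an assumed chance of the incident occurring in a given year.
baseline = expected_annual_loss(incident_cost=2_000_000,
                                recovery_cost=500_000,
                                annual_probability=0.30)

# The same incident, with a preventative control assumed to halve the likelihood.
with_control = expected_annual_loss(incident_cost=2_000_000,
                                    recovery_cost=500_000,
                                    annual_probability=0.15)

control_cost = 150_000  # assumed annual cost of the preventative program

print(f"Baseline expected annual loss: {baseline:,.0f}")
print(f"With preventative control:     {with_control:,.0f}")
print(f"Claimed net annual 'savings':  {baseline - with_control - control_cost:,.0f}")

The numbers land neatly, which is exactly why the framing is tempting; but the probabilities are asserted, not measured, and that is where the conversation breaks down.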

And that is where the conversation breaks down.

Lloyd’s is saying what we’ve always known in cyber: regardless of the scale, it’s difficult to calculate the result. For practitioners, it’s a matter of relaying things that are difficult to quantify. The fact they do their job means there (ideally) hasn’t been a catastrophic incident – or it was kept to a minor function/region with minimal impact. Teams knowingly face the Cypartan’s Dilemma, not just from a functional but also from a metrics perspective – without concrete data on monetary/operational impact it’s difficult to get adequate funding; on the other hand, if there is concrete data, it means they lost. Many indirect, supportive or innovative programs are discontinued for lack of monetary values associated with their results when organisations ‘clean up’ or ‘reduce spending.’

Even programs with otherwise positive outcomes.

From the business perspective, it’s no wonder. How do you justify apportioning resources for unproven programs? Not just unproven for a year; we are talking decades.* For example, board audit committees (and accountants) may have a difficult time seeing the value in a program that pre-emptively tells third parties of compromise prior to payload; they want to know – how much money was saved, how much loss was mitigated? If the organisation were hit, they would see exactly how much was required. From a planning and budget perspective, it is far easier to work with solid numbers. Not to mention that many other factors or departments could plausibly claim credit for why the organisation was not a victim.

What underlies this disconnect?

Limited resources for allocation drive both the need and the concerns. There is only so much to disperse, and many of the other business units come with solid asks backed by quantified outcomes. When the business discusses resources with its cyber division, the amount to spend is easy enough, but the savings, risk mitigation or profit might as well be answered with infinity. Not good enough? Infinity plus one.

The other concern is fiefdoms – when section leaders of the organisation go forth and carve out their organisational niche, requiring excessive resources for less necessary functions staffed by a growing population. Much as we need people in cyber, this happens from an internal-politics perspective. Many people working in these fiefdoms spend endless hours creating and curating data whose results are largely uninterpretable to those outside. And, as can be expected in any fiefdom, these are fiercely guarded by the leaders and their entourages. Nothing out of the ordinary here; this is just organisational realpolitik.

So, Lloyd’s is right – we’ve not yet found adequate measures for cyber. This series of posts will examine some of the requirements we need to build from for meaningful metrics, interpretable from both sides.

-scl

*In this case there is belief, but little proof exists outside of the incident response spaces. At an organisational level we see the monetary result of a response, as we know how much our response cost us, yet we still cannot project how much money would have been lost had the response been inadequate.
