Illustrating the Hidden Risk Accepted by Using “Classic” Information Risk Assessment Scores and Matrices
A follow-up to Launching your Cyber-Risk Quantification Journey with Confidence
In a recent post titled “Launching your Cyber-Risk Quantification Journey with Confidence”, I described how traditional information security approaches to identifying and communicating risk often appear to be “easier” than Cyber-Risk Quantification (CRQ), but end up obscuring or avoiding the inherent uncertainty and complexity in the process, thereby actually increasing risk by providing false confidence.
To make some of that discussion a little more concrete, this post will provide a simple example to illustrate some of those limitations and how CRQ can solve for them – as long as we are willing to accept that data is always imperfect, models are always wrong, and the future is always uncertain.
To that end, consider Illustration #1 below:
You may have encountered "risk score" matrices like this in the past. This matrix, like many others, tries to represent risk using frequency and magnitude, but it does not accurately capture the uncertainty and variability of future events. Specifically, as shown in Illustration #2 below, this example overlooks the fact that a single “risk scenario” may produce frequent but small impacts and infrequent but large impacts, or any number of other loss distributions.
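To make that more tangible, here is a minimal, purely hypothetical sketch (in Python, with made-up frequency and impact parameters) of a single scenario that mixes frequent-but-small events with rare-but-large ones. Nothing here is taken from the illustrations above; it simply shows the kind of distribution a single matrix cell is forced to flatten:

```python
import numpy as np

rng = np.random.default_rng(7)
YEARS = 10_000  # simulated years for one hypothetical scenario

# Frequent-but-small component: ~6 events/year, roughly $5K each (assumed)
small_counts = rng.poisson(6, YEARS)
small_totals = np.array([rng.lognormal(8.5, 0.4, n).sum() for n in small_counts])

# Infrequent-but-large component: ~1 event per decade, roughly $2M each (assumed)
large_counts = rng.poisson(0.1, YEARS)
large_totals = np.array([rng.lognormal(14.5, 0.6, n).sum() for n in large_counts])

annual_loss = small_totals + large_totals

print(f"Median annual loss : ${np.median(annual_loss):>12,.0f}")
print(f"Mean annual loss   : ${annual_loss.mean():>12,.0f}")
print(f"95th percentile    : ${np.percentile(annual_loss, 95):>12,.0f}")
print(f"Worst simulated yr : ${annual_loss.max():>12,.0f}")
# A single frequency/magnitude cell in a matrix has to collapse all of
# this into one point, discarding the shape of the distribution.
```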
As a result, the matrix scores are inadequate as measures of risk and do not fully convey the complexity of the situation (Just because we have ‘numbers’ does not mean we have a ‘measurement’).
If this is the case, then, which of the ‘numbers’ best describes ‘risk’ in a single frequency and magnitude scoring system like the one in Illustration #1 (where “best” means the most effective at supporting a risk management decision)? Do we use the most frequent? The most severe? The average? Or something else?
It turns out, unfortunately, that, on their own, none of the ‘numbers’ describe ‘risk’ in its entirety. All available choices still obscure pertinent information.
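If you want to see this for yourself, the short sketch below draws a hypothetical, skewed annual-loss distribution and prints several candidate ‘numbers’ for it. The parameters are invented purely for illustration; the point is that each statistic is defensible and each tells a different story:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical skewed annual-loss distribution (illustrative parameters only)
annual_loss = rng.lognormal(11, 1.2, 10_000)  # median ~$60K, long right tail

candidates = {
    "Most frequent (mode)":   np.exp(11 - 1.2**2),  # analytic lognormal mode
    "Median":                 np.median(annual_loss),
    "Average (mean)":         annual_loss.mean(),
    "95th percentile":        np.percentile(annual_loss, 95),
    "Most severe simulated":  annual_loss.max(),
}
for label, value in candidates.items():
    print(f"{label:<24} ${value:>12,.0f}")
# Each 'number' is defensible, each tells a different story, and none of
# them alone conveys both how often losses occur and how bad they can get.
```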
Consider Illustration #3 below: You can observe two ‘risks’ that have very similar minimum, maximum, and average loss exposures.
What is not apparent in these results, though, is that ‘Risk #1’ has a much higher “Most Likely” event frequency than ‘Risk #2’, while ‘Risk #2’ has a much higher “Most Likely” impact than ‘Risk #1’, as demonstrated in Illustration #4 below.
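The sketch below does not reproduce Illustrations #3 and #4, but it shows the same effect with hypothetical parameters: two simulated risks whose average annual exposures land in roughly the same place even though one is driven by event frequency and the other by event magnitude:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 50_000  # simulated years per risk

def simulate_annual_loss(events_per_year, per_event_mu, per_event_sigma):
    """Annual loss = (Poisson number of events) x (lognormal per-event impact).
    Purely illustrative parameters, not calibrated to any real scenario."""
    counts = rng.poisson(events_per_year, N)
    return np.array([rng.lognormal(per_event_mu, per_event_sigma, n).sum()
                     for n in counts])

# Risk #1: many small events (~50/yr at roughly $2K each)
risk1 = simulate_annual_loss(50, 7.5, 0.5)
# Risk #2: few large events (~2/yr at roughly $50K each)
risk2 = simulate_annual_loss(2, 10.7, 0.5)

for name, r in (("Risk #1", risk1), ("Risk #2", risk2)):
    print(f"{name}: mean ${r.mean():,.0f}  median ${np.median(r):,.0f}  "
          f"p95 ${np.percentile(r, 95):,.0f}")
# The average annual exposures come out nearly identical (~$100K), yet one
# risk is many small hits and the other a few large ones. That is exactly
# the distinction a single score or summary row hides.
```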
This information matters greatly when evaluating controls and risk responses. For example, if we apply a preventative control to both ‘Risks’, as demonstrated in Illustration #5 below, both ‘Risks’ show evidence of control impact; however, ‘Risk #1’ sees a materially greater degree of risk reduction (a residual exposure of roughly 1 percent of the original) than ‘Risk #2’ (a residual exposure of roughly 10 percent of the original).
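As a rough, hypothetical sketch of that kind of comparison (again, not the assumptions behind Illustration #5), the code below applies the same assumed 90 percent frequency-reducing control to both simulated risks from the previous sketch and reports residual exposure against the full distribution rather than against a single score:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 50_000

def simulate_annual_loss(events_per_year, per_event_mu, per_event_sigma):
    counts = rng.poisson(events_per_year, N)
    return np.array([rng.lognormal(per_event_mu, per_event_sigma, n).sum()
                     for n in counts])

def summarize(label, before, after):
    for stat, fn in (("mean", np.mean), ("p95", lambda x: np.percentile(x, 95))):
        residual = fn(after) / fn(before) * 100
        print(f"{label} {stat:>4}: ${fn(before):>10,.0f} -> ${fn(after):>10,.0f}"
              f"  (residual {residual:4.0f}% of original)")

FREQ_REDUCTION = 0.90  # hypothetical preventative control: 90% fewer events

# Risk #1: many small events; Risk #2: few large events (same as the sketch above)
for label, freq, mu in (("Risk #1", 50, 7.5), ("Risk #2", 2, 10.7)):
    before = simulate_annual_loss(freq, mu, 0.5)
    after = simulate_annual_loss(freq * (1 - FREQ_REDUCTION), mu, 0.5)
    summarize(label, before, after)
# The same control leaves a different residual picture for each risk once you
# look past a single number: compare how the mean shifts with how the tail
# (p95) shifts for each risk.
```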
Beyond the ‘hidden’ risk uncertainty and false confidence in decisions described earlier, the gap in risk reduction here illustrates how the construction of, and assumptions embedded within, assessed risk scenarios often matter in ways that are easily obscured by oversimplified ‘scoring’ and ‘score presentation’. In this example, specifically, the scoring approach did not describe the full risk distribution in its results, and consumers of those results would find it difficult to know how to respond (if they even realize there is a gap to deal with at all).
As noted in the last post, we can do better: measures that cannot clearly and directly be used to make decisions make poor metrics.
CRQ and Open FAIR™, specifically, not only capture the uncertainty inherent in the future with ranges, but they also allow you to include, and progress past, any uncertainty arising from gaps in your available knowledge and data sources. Further, because they provide the full loss exposure curve, risk managers can make more informed decisions as to what drives loss exposure and where it might best be reduced.
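For readers who want to see the mechanics, here is a minimal sketch of how a loss exceedance view can be derived from simulated annual losses. It is not the Open FAIR™ ontology itself, and every parameter is an assumption chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Hypothetical scenario: uncertain frequency (0.5-5 events/yr),
# per-event impact centered near $100K (all assumed for illustration)
lam = rng.uniform(0.5, 5.0, N)                 # uncertain event frequency
counts = rng.poisson(lam)                      # events in each simulated year
annual_loss = np.array([rng.lognormal(11.5, 0.9, n).sum() for n in counts])

# Loss exceedance: probability that annual loss meets or exceeds a threshold
for threshold in (50_000, 250_000, 1_000_000, 5_000_000):
    p = (annual_loss >= threshold).mean()
    print(f"P(annual loss >= ${threshold:>9,}) = {p:5.1%}")
# Plotting 'threshold' against 'p' across many thresholds gives the full
# loss exceedance curve, rather than a single score.
```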
In the end, your role as a risk analyst or manager (and everyone is a risk manager) is to work towards the best decisions possible – not perfect ones.
Summary
If you find yourself sitting down in front of an Open FAIR™ ontology and your SMEs don’t know all the facts and data sources seem hard to find, that’s ok. If your data sources and SMEs can’t agree on what the facts and data mean, that’s also ok. If you are working through this process by yourself, without SMEs or data sources, that is also ok.
In each of those cases, widen the ranges of your inputs and document what you know and what you don’t know. Open FAIR™ (as implemented, for example, by the Ostrich Cyber-Risk CRQ Simulator Tool) will not only help make what information you do have more meaningful, but the process itself will also identify key risk visibility gaps in your organization. Additionally, if your organization converts knowledge of those gaps into actions to reduce them, then the previously unidentified, undescribed, and unmeasured risks arising from making overly confident or poorly informed decisions will also be reduced.
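To illustrate what “widen the ranges” can look like in practice, the sketch below uses a modified-PERT helper (a common way to turn min / most-likely / max estimates into a distribution; this is my own illustrative helper, not the Ostrich tool’s implementation) to compare a narrow set of inputs against a wider, more honest one:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

def pert(low, mode, high, size, lamb=4.0):
    """Modified-PERT samples via a scaled Beta distribution (a common way to
    turn min / most-likely / max estimates into a distribution)."""
    alpha = 1 + lamb * (mode - low) / (high - low)
    beta = 1 + lamb * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size)

def annual_loss(freq_range, impact_range):
    freq = pert(*freq_range, size=N)
    counts = rng.poisson(freq)
    impact = pert(*impact_range, size=N)   # per-event 'typical' impact
    return counts * impact                 # simplified: count x typical impact

# Same most-likely values; the second analyst admits much more uncertainty.
narrow = annual_loss((2, 4, 6),    (40_000, 60_000, 90_000))
wide   = annual_loss((0.5, 4, 12), (10_000, 60_000, 400_000))

for label, losses in (("Narrow inputs", narrow), ("Wide inputs  ", wide)):
    print(f"{label}: p5 ${np.percentile(losses, 5):>9,.0f}   "
          f"p95 ${np.percentile(losses, 95):>11,.0f}")
# Wider, honest input ranges produce a wider (but still decision-useful)
# loss exposure range instead of false precision.
```

The design choice matters more than the specific distributions: the goal is to let low confidence show up as a wider output range, rather than hiding it behind a single precise-looking number.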
To learn more, please contact me directly, or set up a 1:1 demo here of the Ostrich Cyber-Risk CRQ Simulator.
Author’s Acknowledgement: This post, my past posts, and future ones on the topic of risk and CRQ rely heavily on personal experience, shaped by the knowledge and wisdom generously shared by a passionate CRQ community over the years. It really is fantastic to see, at least in this small way, the state of the world continuing to improve.