Originally posted at http://www.howtomeasureanything.com, on Friday, September 11, 2009 9:02:59 AM, by Unknown.
“On page 47 of HTFRM Mr. Hubbard refers to the website for viewing an evolving taxonomy of major risks.
I am having trouble locating it, am I missing something?”
Not at all. We haven’t had much discussion about it, so it has been evolving very slowly.
But this is where we wanted to post those risks. I wanted to keep a running tab of major risks and have discussions about prioritizing them. Eventually, this would be part of a growing GPM.
The important thing is to have something that is not static, and currently there really is no “real time” list of risks. Part of the problem in risk management is that it tends to be about the last big disaster. After 9/11, RM was about terrorism; after Katrina, it was about natural disasters; and after 2008, it was about the financial crisis.
The news gives us some list of risks, but those risks are not exactly chosen for their validity as real risks. Still, it gives us a starting point. While we don’t want to merely react to the latest unique catastrophe, each catastrophe does teach us what is possible. And if we think about it, that lesson can be extrapolated to other possibilities.
There are several things that concern me for the next few decades. But the theme I would like to start with is a category of natural disasters so large that a comprehensive response by emergency services is impossible. Below I list two to start with. While these are major national and global catastrophes, they should also be considered from the point of view of individuals and organizations.
1) A major earthquake on the west coast of the US. The largest earthquake scenarios are many times larger than we would have the resources to respond to. Individual organizations should consider how to relocate operations in that area, arrange transportation of staff, and ensure continuity.
2) The once-in-a-few-centuries tsunami happening slightly more often. Rising oceans from global warming (regardless of what anyone thinks the cause is, the rising oceans are a fact) will have two impacts that compound in a potentially disastrous manner. Slightly heavier loads on the ocean floor as the sea level rises will have seismic effects that cannot be confidently modeled by geologists. The slightly deeper water also has a non-linear effect on tsunami size for a given earthquake: a large wave on oceans just a few centimeters higher will be able to travel much further inland. Combine this with the fact that current population density on the world’s ocean shores is much higher than it was during the last once-in-200-years tsunami. As devastating as it was (100,000+ killed), the 2004 tsunami was not near a major metropolitan population like the west coast of the US. The next big one is just as likely to be near the US as anywhere else.
Any more ideas out there? Let’s not just post the latest big concern on the news, but situations that are so bad that it would be hard to think through the consequences. If disasters like those two above happened – which is quite possible in the span of the current generation – organizations and individuals couldn’t possibly count on assistance from authorities. Either of these events could be many times worse than Katrina. And California, a major part of the US economy, is particularly vulnerable to these scenarios. Events like these seem like science fiction to most people who haven’t actually lived through something like that. As I mention in the book, the mere lack of such events recently doesn’t change the risk, but it does increase our complacency.
Thanks for helping to get this started,
Doug Hubbard
Risk is a quantifiable likelihood of loss; it is a compound construct whose constituent parts include threat, vulnerability, asset, and function/purpose. The greater question is how to deal with risk: accept, ignore (implicit acceptance), share, transfer, or mitigate. There is a cost associated with any method of addressing risk; some costs are explicit (e.g., investing in security mechanisms) and some are implicit (e.g., ignoring an imminent risk to the detriment of your organization).
My personal preference is to start with a threat taxonomy and examine the threat space. Part of the threat space may include threats with intelligence and intent (i.e., people who want something you have). Examining their means, method, and motivation provides further insight into their capabilities, their interest in you or your organization, and an estimation of the likelihood they will act. Prioritizing threats gives insight into the threats you are more likely to have to deal with.
Now look at your asset space and the vulnerabilities within it that may present opportunities for exploitation. Again, prioritize the vulnerabilities. Then compare high-priority threats to high-priority vulnerabilities… the matches provide insight into risk.
All of these are estimates to reduce (certainly not eliminate) uncertainty and to provide justification for an intelligent allocation of resources; i.e., assign limited investment dollars for the best return in terms of increased safety… or better, in terms of the balance sheet and income statement.
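To make the matching step a bit more concrete, here is a minimal sketch of the kind of prioritization I have in mind; the threats, vulnerabilities, and numbers are purely illustrative placeholders, not real assessments:

```python
# A rough sketch of matching prioritized threats to prioritized vulnerabilities.
# All names and numbers are illustrative placeholders, not real assessments.

# Threats: assumed annual likelihood that the threat event occurs.
threats = {"flood": 0.05, "targeted intrusion": 0.20, "power outage": 0.50}

# Vulnerabilities: (relevant threat, chance the event causes a loss, loss magnitude in $).
vulnerabilities = {
    "data center in basement": ("flood", 0.8, 2_000_000),
    "unpatched web server": ("targeted intrusion", 0.3, 500_000),
    "no backup generator": ("power outage", 0.9, 100_000),
}

# Cross-reference: expected annual loss = threat likelihood x chance of loss x magnitude.
exposure = []
for vuln, (threat, p_loss, magnitude) in vulnerabilities.items():
    exposure.append((threats[threat] * p_loss * magnitude, vuln, threat))

# Prioritize: the biggest expected losses are the first candidates for security dollars.
for eal, vuln, threat in sorted(exposure, reverse=True):
    print(f"{vuln} ({threat}): expected annual loss ~ ${eal:,.0f}")
```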
Thanks for your input.
Is your exposure to risk assessment within some kind of security space – IT, terrorism, or otherwise? Your terminology indicates that you were introduced to risk management in a field like that. In particular, actuaries, quants, and economists do not actually spend a lot of time distinguishing “threat” from “vulnerability”. Part of the point of my book was to show people whose primary exposure to risk management was in one particular field that there is a much bigger world out there and that some methods actually outperform others. When I do consult for those in security fields that define “threat” vs. “vulnerability”, I find it an ambiguous distinction.
If we use terms more generally recognized by insurance, finance, engineering risk analysis, and such, and combine “threat” and “vulnerability”, then the threat space could simply be categories of risks by the type of event that drives them (say, a hurricane). Then, as you suggest, this can be made into a matrix of different losses from the event by cross-referencing with different asset classes. I think cross-referencing by asset classes, loss events, business areas, and more are all helpful brainstorming methods.
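As a rough illustration of that kind of brainstorming matrix (the events, asset classes, and notes below are just placeholders):

```python
# A sketch of the cross-referencing matrix described above: event categories
# versus asset classes, with a note wherever a loss scenario seems worth estimating.
# The entries are placeholders for brainstorming, not real assessments.

events = ["hurricane", "data breach", "supplier failure"]
assets = ["facilities", "customer data", "production line"]

scenarios = {
    ("hurricane", "facilities"): "structural damage, downtime",
    ("data breach", "customer data"): "notification costs, liability",
    ("supplier failure", "production line"): "lost output",
}

for event in events:
    for asset in assets:
        note = scenarios.get((event, asset), "-")
        print(f"{event:18s} | {asset:16s} | {note}")
```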
I think it is important, however, to think of risk management not just within a specific field. A general-purpose taxonomy would be equally relevant to legal departments, manufacturing operations, information security, and finance. From the point of view of security-related asset classes, what would you suggest first?
On a side note, you will notice in the FRM book that I don’t pull any punches in my evaluation of the quantitative validity of risk assessment methods in IT in general, and of NIST 800-30 for security in particular. They tend to rely on scoring methods that can be shown to add no measurable value to the decision-making process. I also show that, with the right quantitative tools, it is practical to apply more quantitatively sound methods. I give a couple of examples of that in my first book (HTMA).
Conventionally, threat is external and vulnerability is intrinsic to operations. The difference is mainly whether it is in your power to fully address either one, which will affect your course of action and the costs.
My exposure to risk is in terms of both insurance and security, though with more security influence. The purpose of distinguishing ‘threat’ from ‘vulnerability’ is to devise a method for intelligent allocation of resources. Meaning, you only have so many security dollars to spend… where are you likely to get the best return on investment?
The framework starts with a threat taxonomy to examine what threats are relevant to you; e.g., earthquakes are more common in California than in Maryland, so the threat of an earthquake has more meaning to a business in San Francisco than to one in Baltimore. For a Baltimore firm to spend dollars on earthquake-resistant buildings and to purchase earthquake insurance would not be an intelligent investment of security dollars. Likewise, Omaha, Nebraska doesn’t have a lot to worry about with respect to hurricanes.
First, identify the threats that have meaning to you; e.g., your corporate headquarters is in a flood plain. Then you can examine your asset space for vulnerabilities relevant to the threat; e.g., the data center is in the basement. Given the threat, moving the data center to the third floor (above the expected high-water mark) or to another building is prudent risk mitigation and an intelligent allocation of resources.
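A back-of-the-envelope sketch of how that flood example might be framed as a resource-allocation question; all of the figures are hypothetical:

```python
# Hypothetical figures for the flood-plain / data-center example above.
p_flood_per_year = 0.02          # assumed annual chance of a damaging flood
loss_if_basement = 3_000_000     # assumed loss with the data center in the basement
loss_if_relocated = 200_000      # assumed residual loss after moving it upstairs
relocation_cost = 400_000        # assumed one-time cost of the move
horizon_years = 10               # planning horizon, ignoring discounting for simplicity

expected_cost_stay = p_flood_per_year * loss_if_basement * horizon_years
expected_cost_move = p_flood_per_year * loss_if_relocated * horizon_years + relocation_cost

print(f"Expected 10-year cost if the data center stays put: ${expected_cost_stay:,.0f}")
print(f"Expected 10-year cost if it is relocated:           ${expected_cost_move:,.0f}")
# On these (made-up) numbers, relocation is the better allocation of security dollars.
```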
While my thoughts along these lines are security-related, my driving principle is: business need drives technology investments, and business risk drives security investments; i.e., there is no justification for a security investment without a distinct business case for reducing the likelihood of an incident occurring or reducing the severity of its impact.
Your experience, then, is as an IT security expert within the insurance industry, as distinct from someone who does quantitative risk analysis for insurance. So do you have empirical evidence that this particular practice of distinguishing vulnerability from threat improves ROI? I would be very interested in seeing the math on that. The actuarial profession has apparently existed for well over a century without that distinction. While the probability and magnitude of an event must often be decomposed into a large number of components – each a long chain of possible contributing events or conditions – the distinction of vulnerability vs. threat seems like an arbitrary point in a long sequence. Again, this is why you will not actually find that distinction very often outside of security. But I understand your point of view.
You are correct that all risk management comes down to reducing the likelihood and severity of impacts. Let’s start at the top of the hierarchy, where IT security-related issues are one branch of the entire field of risk management. Would you be interested in presenting a hierarchy within the IT security branch?
Doug,
I’m very interested in engaging in this study as well, since I am also in Information Security (just not as focused as kwillett).
If you decide to focus on this area, reach out to me. Jack Jones is a proponent of F.A.I.R. (Factor Analysis of Information Risk, http://www.infosecramblings.com/exploring-fair/) and I’d like to see where we can make our field better in these two parallel areas.
Regards,
Phil Agcaoili
It would be great to have you on board. I briefly interviewed Jack Jones when doing research for my second book and I’m superficially familiar with FAIR. I think it would be greatly improved with use of calibration and some of the other methods I mention in the second book.
Did you read the part in the first book about Peter Tippett? He seems to have a very pragmatic and quantitatively oriented view of IT security.
What is the current opinion about the key risks in IT security?
Thanks for your input,
Doug Hubbard
My background is as a project management professional who got involved with risk, and later with the integration of models and language between risk disciplines. From this experience and research, and from reading the FRM book, I would conclude that your very rigorous approach – including the much-liked comments on the PM approaches to RM – is but part of the Risk Opera.
The segregation and later integration of ‘threats’ and ‘vulnerabilities’ seeks the same purpose as your maths models, in that it is trying to consolidate casually identified ’cause-impact-effect’ risks and convert them into coherent and understandable data for stable management assessment. From this understanding, more mathematically rigorous analysis is possible.
The problem this refers to is often found in engineering, for example – my educational background – in that there is a tendency to reach for the computer prior to sorting out the validity of the concepts to analyse. The ‘threats’ and ‘vulnerabilities’ matrix is, as I see it, a very justified step in a wider risk management approach, as it highlights the options to be cost-considered and maps all the costs and benefits to be included for the completeness of the analysis.
In project management, a parallel would be looking at ‘threats’ and ‘flexibilities’. So often the magnitude of impacts is magnified or, worse, later risk in the project and/or the project deliverable is increased, due to unreasonable or unnecessary constraints being enforced.
I would endorse your dislike of scored risk analysis – and its usual and totally distracting partner, ‘The 10 Top Risks’ – but would make one exception. When time is pressed or data is deliberately not provided, such scored models can be very useful for comparing options. Indeed, even purely graphical (no scaling) models can aid such decisions. These tools are purely comparative, and I would agree they are not valid in overall project evaluation.
A final point: assumed knowns or neglected facts are often the source of risk loss, for example hurricanes in Omaha. If one occurred, the downside could be very substantial or even business-threatening, but it is very rare, and hence the insurance industry may be a very cost-efficient way to cover this high-impact, very-low-probability risk. Conversely, frequent-claim risk areas may be indicative of a need for business change, and the insurance cover may be cost-inefficient compared with process change costs.
Thanks for a thought-provoking book.
Richard Watson
Richard,
Thanks for your input. When you say that the threat vs. vulnerability distinction is “coherent”, that is begging the question. My point was that it is often not coherent. There are many situations where that distinction doesn’t make sense. In an attempt to make RM “stable” (whatever that means), this taxonomy is applied in situations where it isn’t necessary. Furthermore, the idea that threat and vulnerability should simply be multiplied together is an entirely arbitrary choice among possible functions. I don’t claim that using threat and vulnerability is always wrong – I just say it isn’t necessary and is, in fact, confusing in many of the situations where it is applied.
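To illustrate how arbitrary that choice of function is, here is a toy example (the scores are made up) in which equally plausible ways of combining the same “threat” and “vulnerability” scores reverse which risk comes out on top:

```python
# Two risks with made-up 1-5 "threat" and "vulnerability" scores.
risks = {"risk A": (5, 2), "risk B": (3, 4)}   # (threat, vulnerability)

# Three equally arbitrary ways to combine the two scores into one number.
combine = {
    "multiply":     lambda t, v: t * v,
    "take the max": lambda t, v: max(t, v),
    "weighted sum": lambda t, v: 0.7 * t + 0.3 * v,
}

for name, f in combine.items():
    top = max(risks, key=lambda r: f(*risks[r]))
    print(f"{name:12s} -> highest-ranked risk: {top}")
# "multiply" puts risk B on top; the other two put risk A on top.
```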
You say that when “time is pressed or data deliberately not provided then such scored models can be useful”. But the research I cite indicates that the perception of usefulness in this case is a placebo. The use of scores does not alleviate the lack of time or data – it just makes you feel better about it. But if you have new research that shows a measurable improvement in forecasts and/or decisions as a result of using these scores (pressed for time and data or not), then I would certainly be very interested in it. I could only find research showing the opposite.
In any event, I don’t see how the use of scores would alleviate the problem where someone might “reach for the computer” too soon. The use of any particular taxonomy or matrix can be done with or without a computer and the use of computer models can be done with or without that particular taxonomy or matrix. I certainly don’t endorse the uninformed use of a computer model. I just don’t see how the use of this approach is either a necessary or sufficient condition for avoiding the misuse of computer models.
Regarding your final point, yes, it is certainly the case that assumed knowns or neglected facts are often the source of losses. Is your point that – somehow – a matrix or particular taxonomy reduces the occurrence of that more than the lack of a matrix or the use of a different taxonomy? Is there research showing that the errors removed by these methods are measurably greater than the errors that tools like scoring models are known to add? I cite several sources in the book that quantify the extent of the errors added by such methods. In order to make your case, we would need to show that the errors added by the methods are significantly less than the errors avoided. There appears to be no literature making that case, but if someone has recently published something that does, I would be very interested in it.
Thanks again,
Doug Hubbard
Many thanks for your comments, which I respond to as follows:
Taking the last point first, sweeping statements like ‘not worth insuring for hurricanes in Omaha’ are not valid unless you have investigated the cost vs. cover, and have taken in that the potential downside in an area not used to dealing with hurricanes may be worse than in a state more familiar with them. The insurance industry’s 2×2 P-I (probability-impact) model would indicate that this should be subject to risk transfer consideration (e.g., insurance cover).
I consider that two other clarifications are needed:
“I don’t see how the use of scores would alleviate the problem where someone might “reach for the computer” too soon”
Totally agree; my point was that many of the standards define an ‘identify – analyse’ sequence which aligns with many technical individuals’ desire to produce a ‘calculated result’ (i.e., “reach for the computer”). I consider that there is an essential intermediary step of ‘understand, organise, and properly enumerate the identified data’… then and only then ‘analyse in real-number terms’. It is the omission of this step, I consider, that is a fundamental problem with many standards and that results in partially informed ‘scoring techniques’ (i.e., they are a symptom of the problem, not the cause).
“an attempt to make RM “stable” (whatever that means)”
In the project environment there is a great tendency to identify and record risks & counteractions… The End. The approach I use is to examine the risks identified and then seek to redefine as many as possible such that they can be managed by ‘repeatable, more reliable, and auditable processes in which change is controllable and recordable’. In this way a more stable project operation is created, one that is trying to manage fewer risks on the fly. The benefits of this have been observed in many projects and have resulted in real cost savings.
I do not have any documented evidence for the use of ‘scoring’ in comparative risk assessments; it is purely my experience, and that of others, that they are beneficial for moving a discussion forward or for defeating discussion-blocking based on ‘anybody with half a brain’ arguments. I do not hold them up as a perfect tool, but they can be catalytic in generating real discussion and information-seeking activity.
And so to the final point:
What I understand your modelling to be targeted at is improving decision making, and I totally appreciate your drive to base that on the best possible information. I also appreciate, and I hope participate in, your critical-thinking approach to ‘defined best practice’, often so defined by institutional committees.
From a conceptual position, you are taking data from a risk environment that is disorganised and often partial and presenting it in an organised, properly enumerated, and analysed format that can be used to inform and compare decisions. From my clarification above, I see this as moving from confused and limited individual risk data to data that can be used in considered decisions and processes. This is also what I consider ‘kwillett’, and in a different way myself with, say, ’cause-effect’ models, to be trying to do. The fact is we are doing it further back down the ‘feed chain’ of risk understanding and evaluation, operating with basic risk identification data, rather than from the viewpoint of your work on the rigorous analysis of risk evaluations.
A paper on the above and the use of comparative scoring is ‘Using Cause-Effect Analysis to Improve the Management, Control, & Reporting of Risk’ – Andrew Moore, 6th Risk Conference 28 Nov 2003 London.
Regards,
Richard Watson
Richard,
Do you honestly think further analysis of insurance coverage of hurricanes in Omaha could even possibly contradict my “sweeping” statement? The physics of hurricanes is the key to computing the probability of a hurricane in Omaha. My statement is only as “sweeping” as the fact that what drives hurricanes is a very large body of warm water and the lack of such anywhere near Omaha. (Of course, there is tornado insurance in Omaha but, unlike hurricanes, it is possible to somewhat diversify the risks of selling tornado insurance by selling more of it in the same area. The destruction of tornadoes is much more concentrated and may damage some blocks while leaving the rest of a city unharmed.)
I realize that the benefits of scores are based on “purely [your] experience”, but that’s the problem. Experience can be (and has been, many times) contradicted by scientific evidence. This is why the experience of doctors or patients is not sufficient for clinical drug trials. In an article I wrote for Analytics Magazine, and in some of the new material I added for the second edition of How to Measure Anything, I cite research from various areas showing (quantitatively and scientifically) that it is possible to gain increased confidence from some method without a measurable improvement. As the decision psychologist Paul Schoemaker put it, “Experience is inevitable, learning is not.” He knew (based on scientific measures, not experience) that it is possible to have a lot of confidence in one’s experience and yet not perform better than an inexperienced person or, in some cases, even random chance.
When we resort to experience alone, we are relying on an imperfect method of recall and imperfect inferences from the recalled experience. We tend to only recall anecdotes – not statistical aggregates – and the anecdotes we remember most are the ones that confirm our perception of “experience”. Furthermore, we will make various classes of incorrect inferences from that data. Where I challenge scoring methods, note that I don’t do so using only my experience. I’m citing the supporting research. In order to argue that my conclusions are wrong, you will have to point out the errors in all of that research or possibly cite other research that contradicts it. It would be circular reasoning to attempt to refute the claims “Experience is flawed. Scores are flawed.” with the counterclaim “Scores are valid based on my experience.”
I checked out that paper. As I suspected, it was another “how to” that offers no scientifically valid evidence at all that it improved decisions. It’s just not a scientific paper like those I cited. No data from controlled experiments are offered in which different groups used different methods over a large number of trials and a statistical analysis of the results showed how one group outperformed others using other methods or even unaided intuition.
There is, however, evidence from controlled experiments where the use of decomposition improved estimates, and this approach would certainly qualify as a category of decomposition. But scores are another thing. The author resorts to scores not only without offering any evidence of their benefits but without addressing (or apparently even being aware of) all of the errors the research I cited points out. The author doesn’t appear to be aware of the research of Tony Cox or David Budescu on the errors introduced by scoring methods. Nor does the author appear to be aware of the research of those like Craig Fox, who show how arbitrary features of scales have a much larger influence on their outputs than we might suspect. The author doesn’t even seem to be aware of the research on framing and anchoring and how even the order in which we score things changes the scores.
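As a minimal sketch of what I mean by decomposition (paired here with a simple simulation, and with the event probability and loss range invented purely for illustration):

```python
# A toy decomposition with a simple Monte Carlo simulation. The 5% annual event
# probability and the $100k-$2M loss range are invented, purely for illustration.
import math
import random

def lognormal_from_90ci(low, high):
    """Draw from a lognormal whose 5th/95th percentiles are approximately low/high."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return random.lognormvariate(mu, sigma)

trials = 10_000
losses = []
for _ in range(trials):
    if random.random() < 0.05:                          # does the event happen this year?
        losses.append(lognormal_from_90ci(100_000, 2_000_000))
    else:
        losses.append(0.0)

print(f"Simulated mean annual loss: ${sum(losses) / trials:,.0f}")
print(f"Simulated chance of any loss in a year: {sum(l > 0 for l in losses) / trials:.1%}")
```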
But thanks for your input.
Doug Hubbard
Hi Doug,
I’ve read somewhere (likely in your book) that people find it difficult to estimate very low probability events, and that they are inherently bad at it. The theory is that this has evolutionary roots: If cave man Joe didn’t know of anyone who was attacked by a sabre-tooth tiger, he didn’t spend a lot of time pondering the odds. He was better off spending his time mitigating the real risks to his survival. So, since there was never a genetic need to be good at estimating low probability events, our brains never developed the skill.
Consequently, unless they have the data, it is difficult for the average person to estimate the probability of their house burning down, totalling their car, or dying in a plane crash. And if they tried, their estimate would likely be inaccurate, maybe off by orders of magnitude.
Strangely, we do seem to be more comfortable estimating risk (i.e., consequence × probability) directly; e.g., if I would be willing to pay up to $X for auto, home, or travel insurance, it means I estimate the risk to be X.
But does more comfort mean that the estimate is better?
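As a rough sketch of what I mean by estimating the risk directly (the numbers here are made up):

```python
# If I would pay up to this premium for coverage of this loss, the ratio is an
# implied probability estimate (ignoring the insurer's margin and my risk aversion).
# Both numbers are invented.
premium_per_year = 500        # what I would pay for the policy, in $
covered_loss = 250_000        # the loss the policy would cover, in $

implied_probability = premium_per_year / covered_loss
print(f"Implied annual probability of the loss: {implied_probability:.4f}")  # 0.0020

# The question remains: is this implied estimate actually any better than asking
# me to state the probability directly?
```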