I get many emails about the validity of Risk Maps, Risk Scores and Risk Matrices in risk analysis. I know there are a lot of passionate proponents of these methods (and many heavily-invested vendors) but there is still no evidence that such methods improve forecasts or decisions.
Read my skeptical look at many risk analysis methods in the OR/MS Today and Analytics articles (See the Articles page). I compiled quite a lot of research in these articles and even more in my book that should lead any manager to be suspicious about whether the perceived effectiveness of these methods is real or imagined.
I’ve updated the upcoming second edition of How to Measure Anything with even more research in that vein. Stay tuned and post your comments.
I’m looking forward to the new edition of your book. Thanks.
Thanks again for your support. I think you will like the new material. The book expanded by more than 20 pages and quite a lot of other pages changed.
Great article yet again and so very true.
Although I agree with your point that "decisions may be worse than what unaided intuition would yield" (http://twitter.com/Hacksec/statuses/10354088764), I also believe the business is more likely to listen when we use structure to support our opinion (in my case, a risk analysis). Perhaps, from a practical perspective in my field (information security), business decisions are more easily made using risk examples, so this is a quick-and-dirty method that avoids wasting time on the calculations and provides a good balance.
You make several points I should respond to separately. Here is a three-point response:
1. Does it matter at all whether the analysis method actually *works*, or is the only issue whether it is "structured"? If structure is all that impresses them, and not a measurable improvement in decisions, then astrology should suffice. It doesn't work, but it has structure.
2. If the method doesn’t really work then why does it even matter that they listen to it?
3. Why assume calculations are a "waste of time"? The evidence shows that doing some of the math I mention actually does improve forecasts and decisions. The evidence also makes it clear that risk analyses based on ordinal scales are the real waste of time. Or are you saying that it would take you too long to do meaningful calculations that would provably reduce decision error? I don't really find that to be the case, and if it were, that can be solved.
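To make point 3 concrete, here is a minimal sketch of the kind of "simple math" being contrasted with ordinal scales: a Monte Carlo simulation of expected annual loss from a single risk event. This is an illustration, not the author's exact method, and every distribution and parameter below is a made-up assumption.

```python
# Minimal sketch: estimate expected annual loss for one risk event
# by simulation, instead of assigning it an ordinal "high/medium/low" score.
# All parameters below (10% event probability, lognormal impact) are
# illustrative assumptions, not real figures.
import random

def simulate_annual_loss(trials: int = 100_000) -> float:
    """Average annual loss across simulated years for one risk event."""
    prob_event = 0.10  # assumed 10% chance the event occurs in a year
    total = 0.0
    for _ in range(trials):
        if random.random() < prob_event:
            # Impact is uncertain: lognormal with median around $440k
            total += random.lognormvariate(13.0, 0.8)
    return total / trials

random.seed(42)
print(f"Expected annual loss: ${simulate_annual_loss():,.0f}")
```

Even a toy model like this produces a dollar-denominated estimate that can be compared against mitigation costs, which an ordinal score cannot.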
I just spoke to a group of CISOs today about this as well. I think they all left convinced that most of the methods they use are a placebo effect: they make people feel better but don't actually improve anything. Whether a decision analysis method works really SHOULD matter. Otherwise, it really is a waste of time.
Thanks for your comment,
There is much ground to cover in improving risk assessments and making them more fact-based. Many times the methods used are not fit for purpose or even do harm. Triggered by your book, I took a look at the way public risks are assessed in the Netherlands, for example in the case of storage of dangerous gases or substances. Dutch law prescribes the specific methods and parameters to be used in that kind of analysis. These prescribed methods are carved in stone and cannot be adapted to better model the risks being analysed. You will agree that the value of such a risk analysis is questionable. To try to improve this, I submitted several articles to the press on the flaws of the risk assessment methods used and the consequences they may have; Dutch television also made a news item about it. See my blog (http://john-poppelaars.blogspot.com/) for further details. Things are getting into motion now, but there is still a long way to go to improve the methods used.
Thanks for your observations and your work in this area. As you have already seen, much more attention on this subject is needed.
Thanks for your input,
I have very much enjoyed your book and approach to measurement of risk.
I have a question regarding the measurement of impact (for a variable or event) as a means of calculating EOL where the impacted entity is not a for-profit group (it could be either a government agency or a non-revenue profit center in a company). I have received significant pushback on my insistence that any impact here does have a quantifiable value.
I have taken the approach of valuing a 100% impact at, as a minimum, the budget of the not-for-profit entity itself in a given year (or over the period of the risks being measured). My rationale is that funds are given to the not-for-profit entity with the expectation that it should be producing benefit at least equal to its budget.
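The approach described above can be sketched in a few lines. This is only an illustration of the commenter's budget-based valuation, not an endorsed method, and the function name and all figures below are hypothetical assumptions.

```python
# Sketch of the commenter's approach: value a 100% impact on a
# not-for-profit entity at its annual budget, then compute Expected
# Opportunity Loss (EOL) as probability * impact fraction * budget.
# All names and numbers are illustrative assumptions.

def expected_opportunity_loss(annual_budget: float,
                              prob_of_event: float,
                              impact_fraction: float) -> float:
    """EOL for one risk, with a 100% impact valued at the entity's budget."""
    return prob_of_event * impact_fraction * annual_budget

# Example: a $10M-budget agency, 5% chance of an event that eliminates
# 30% of the benefit the agency would otherwise produce this year.
eol = expected_opportunity_loss(10_000_000, 0.05, 0.30)
print(f"EOL: ${eol:,.0f}")
```

With these assumed inputs the EOL works out to about $150,000 per year, a figure that can then be weighed against the cost of mitigations or of further measurement.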
Any thoughts on how you have handled this in the past?
Regarding Risk Maps & Matrices (and hence Scoring): these are so deeply embedded in our risk culture that there is, unfortunately, no room for debate. I undertook a pre-publishing review for a PRM book where the author left out PI diagrams because he saw no real value in them, a decision I fully endorsed. However, the other reviewer must have insisted (unknown to me) that they should be included, as I was very surprised, when I received my free copy, to find a piece of text and diagrams extolling their virtues!
One mapping I have used to illustrate risk (in a purely graphical/name format, with effort placed on removing ambiguity) is the Impact-Manageability plot, which can generate very different reactions compared to PI plots. For example, a 90%-probability, project-threatening risk has to be made manageable, and the cost of doing that has to be included in the project budget or bid pricing. No option!
But of course there is room for debate. And there will be much benefit in it.
Your experience with the review process illustrates a key point I make in the book: it is simply not considered necessary in the PMI culture to provide citations showing measurable benefits of a method before it is made part of the canonical methods. We can imagine what the pharma industry would be like if new drugs had the same lack of standards for demonstrating both safety and efficacy.
Thanks for your input,
I am looking forward to the new edition of the book, and thanks for your work in this area. Your book makes a lot of really interesting key points, and I would encourage everyone to read The Failure of Risk Management.
Just finished FRM. I am immersed daily in innumerable IT risk assessment questionnaires, which are now routinely hundreds of questions long and getting longer all the time. The relevance of some questions to the assessment of risk is hard to fathom. As a supplier of IT services we have a finite budget for reducing risk for customers. I hope to use the ideas in FRM to build an objective basis for deciding whether to invest further in a particular area. Comments from you and your readers who have made progress in this area are most welcome.