Jack Jones, the inventor of the FAIR method for assessing cybersecurity risk, responds to a defense of NIST 800-30 left as a comment on one of his blog posts. I take Jack's side on this. For those of you who have read my books, NIST 800-30 is one of the standards that promotes methods I spend a lot of time debunking (ordinal scales for risks, risk matrices, etc.).

For context on Jack's post: Jack made an earlier post criticizing NIST 800-30, which was then commented on by a defender of that same standard. Jack's response is, as always, right on track. The comment demonstrates again that the resistance to quantitative methods in cybersecurity is not just based on a lack of familiarity with quantitative risk analysis methods, but on a belief that one actually understands such methods when one clearly does not. In this particular case, the commenter attempts to correct Jack on what likelihood means "in the strict sense" and then proceeds to get it entirely wrong. More importantly, the entire comment presumes a "frequentist" (but still incorrectly explained) approach to the problem. Anyway, see for yourself (click on the image above).
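For readers who want the strict sense the commenter fumbled: in statistics, the likelihood is the probability of the observed data treated as a function of an unknown parameter, not the probability of the parameter itself. Here is a minimal sketch of my own (not from Jack's post) using a binomial model of, say, breach events per year:

```python
from math import comb

def binomial_likelihood(p, k, n):
    """Likelihood of parameter p given the data (k events in n periods):
    L(p | data) = P(data | p), read as a function of p, not of the data."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Suppose we observed 2 events in 10 periods. The likelihood lets us
# compare how well different values of p explain that observation.
l_at_10pct = binomial_likelihood(0.10, 2, 10)
l_at_20pct = binomial_likelihood(0.20, 2, 10)
# p = 0.20 explains the data better than p = 0.10 (it is, in fact, the
# maximum-likelihood estimate: 2/10). Note the likelihood is not a
# probability distribution over p -- it need not sum to 1 across p values.
```

The point is simply that "likelihood" has a precise technical meaning, and a Bayesian uses the same likelihood function combined with a prior; the commenter's frequentist framing gets neither version right.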

HDR completed a survey about this which shows such misconceptions are common in cybersecurity. We offer that survey separately or as part of a package with the How to Measure Anything in Cybersecurity Risk webinar. Such misconceptions stand in the way of adopting measurably better quantitative methods, leaving in place risk matrices that have been debunked by multiple researchers cited in my books.