Errata Statistics

Although my publisher assures me that some errors always make it through the proofing process, each one still frustrates the author – mostly because the author had a chance, at some point, to catch almost every one of them.

My wife teaches math at a local community college and taught math at a high school for many years. She tells me that every textbook she ever used had an errata sheet and that some had up to three pages of errata. In proofing, we find scores – even hundreds – of errors, so it is likely that some will get through. Can this likelihood be computed? If you have read my book, you would say “Of course!” So I started thinking that if several people each find a number of errors over a period of time, I should be able to estimate the undiscovered errors.

In the book I discuss methods for problems like this, including the catch, release & recatch approach for estimating fish populations. If two independent error-finding methods find some of the same errors but each also finds errors the other did not, then we can estimate the number that they both missed. I mention in the book that this same method can apply to estimating the number of people the Census missed counting or the number of unauthorized intrusions in your network that go undetected.
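The catch-release-recatch idea described above is usually formalized as the Lincoln-Petersen estimator. The following is a minimal sketch of that calculation applied to two independent proofreaders; the function name and all the numbers are hypothetical, chosen only for illustration.

```python
def estimate_total_errors(found_by_a, found_by_b, found_by_both):
    """Lincoln-Petersen estimate of the total error count.

    If reviewer A finds n_a errors, reviewer B independently finds n_b,
    and m of those are the same errors, the estimated total is
    N ~= n_a * n_b / m.
    """
    if found_by_both == 0:
        # With no overlap, the estimator is undefined (division by zero).
        raise ValueError("No overlap between reviewers; estimate undefined.")
    return found_by_a * found_by_b / found_by_both

# Hypothetical example: A finds 60 errors, B finds 50, and 30 are shared.
total = estimate_total_errors(60, 50, 30)   # 60 * 50 / 30 = 100.0
unique_found = 60 + 50 - 30                 # 80 distinct errors found
missed = total - unique_found               # estimated 20.0 still undiscovered
```

The intuition is that the overlap measures how thoroughly each reviewer sampled the true population: a large overlap suggests most errors were found, while a small overlap suggests many remain.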

I had another method that I considered including in the book but, in the end, decided to leave out. This method is based on the idea that if you randomly search for errors in a book (or species of insects in the rain forest, or crimes in a neighborhood), the rate at which you find new instances will follow a pattern. Generally, finding unique instances is easy at first, but as the number of instances already found grows, new discoveries become increasingly rare.
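The diminishing-returns pattern described above can be sketched under one simplifying assumption that is not from the original text: that every error has the same chance p of being spotted in any single proofing pass. Under that assumption, the expected number found after a given number of passes follows a simple saturating curve; the function name and the example numbers are hypothetical.

```python
def expected_found(total_errors, p, passes):
    """Expected number of distinct errors found after `passes` searches,
    assuming each error has independent probability p of being caught
    in any one pass (a deliberately simplified model).
    """
    return total_errors * (1 - (1 - p) ** passes)

# Hypothetical illustration: 100 true errors, 30% catch rate per pass.
first_pass = expected_found(100, 0.3, 1)                 # about 30 new finds
second_pass = expected_found(100, 0.3, 2) - first_pass   # about 21 new finds
# Each pass yields fewer new errors, and the slowdown itself carries
# information about how many errors remain undiscovered.
```

In practice one would fit such a curve to the observed discovery rate and read off the asymptote as the estimate of the total error count.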

Welcome

Welcome to the consolidated forum for How to Measure Anything and The Failure of Risk Management. If you would like to comment on either book, ask questions about examples, get philosophical about the nature of measurement and risk, or challenge us with an impossible measurement, then there is a discussion for you. If you have ideas for the structure of the forums, including new topics, you will find a forum just for that. Here are a few introductory comments:

1) This is a moderated forum. I’m not too concerned about behavioral rules because I reserve the right not to publish a submission. But criticism of me or disagreement with me on any topic will NOT be a factor in rejecting or keeping a post. I may reject a post if it asks a question identical to one I have already answered in another post.

2) I will try to approve posts ASAP and usually within a week. I apologize in advance if it ever takes longer.

Other than that, please contribute your thoughts on these topics and encourage friends and coworkers to do the same.

Thanks,

Douglas W. Hubbard

It’s All an Illusion

It’s All an Illusion by Douglas Hubbard
Boston Society of Architects, Sept/Oct 2008.

Doug Hubbard reviews his original “anything can be measured” concept for an audience of architects. The reasons why some things still seem intangible or immeasurable are refined and restated in this, the first article since the release of Hubbard’s seminal book, How to Measure Anything. [inactive link]

Getting More Precise at Risk Assessment

The Editors
Eweek, 11/10/2003

Baseline interviews Doug Hubbard about how to start thinking about IT risk more like an actuary would, instead of using subjective “1 to 5” scales. Another consultant interviewed for the story says scientific measurements are “ideal in theory” but most managers aren’t well versed in statistical research. Actually, HDR uses scientific methods in practice (not just theory), and total mastery of the subject by management is not a constraint (do managers really understand network management, database design, or encryption?). Fortunately, the editors allow Hubbard to respond to the objections. [view article]