Originally posted on http://www.howtomeasureanything.com/forums/ on Monday, December 22, 2008 4:23:05 PM, by Dynera.

“Hi Doug,

I have your book on order and I was wondering if you cover the measurement of prevention effectiveness. For instance, I was involved with a software architecture team whose purpose was to vet architecture designs. We received several feedback forms saying that our input was useful, but beyond that we really didn’t see any other way to demonstrate our effectiveness.

Any feedback is appreciated.”




First, my apologies for the long delay in my response. Between the holidays and site problems, it’s been quite a while since I’ve looked at the Forums.

Vetting architectural designs should have measures similar to any quality control process. Since you are a software architect, I would suggest the following:

1) Ask for more specifics in your feedback forms. Ask whether any of the suggestions actually changed a design and, if so, which suggestions specifically and how.

2) Tally these “potential cost savings identified” for each design you review.

3) Periodically go into more detail with a random sample of clients and your suggestions to them. For this smaller sample, take a suggestion that was identified as one that resulted in a specific change (as in point #1 above) and get more details about what would have changed. Was an error avoided that otherwise would have been costly and, if so, how costly? Some existing research or the calibrated estimates of your team might suffice to put a range on the costs of different problems had they not been avoided.

4) You can estimate the percentage of total errors that your process finds. Periodically have independent reviewers separately assess the same design and compare their findings. I explain a method in Chapter 14 for using the findings of two different vetters to estimate the number of errors that neither found. In short, if both vetters routinely find all of the same errors, then you probably find most of them. If each vetter finds many errors that the other does not, then there are probably many that neither finds. You can also periodically check the process with the same method used to measure proofreaders – simply seed the design with a small set of known errors yourself and see how many the vetter finds.
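The two-vetter idea in point #4 can be sketched numerically. The snippet below (a minimal illustration, not the exact formulation from Chapter 14) uses the classic capture-recapture estimator: if reviewer A finds n1 errors, reviewer B independently finds n2, and m of those overlap, the total error population is roughly n1 × n2 / m. All the numbers in the example are hypothetical.

```python
def estimate_total_errors(found_a, found_b):
    """Capture-recapture estimate of the total error population.

    found_a, found_b: sets of error identifiers found by two
    reviewers working independently on the same design.
    """
    overlap = len(found_a & found_b)
    if overlap == 0:
        raise ValueError("no overlap between reviewers; total is unbounded")
    # Lincoln-Petersen estimator: N ~= n1 * n2 / m
    total = len(found_a) * len(found_b) / overlap
    # Errors that neither reviewer caught:
    missed = total - len(found_a | found_b)
    return total, missed

# Hypothetical review: reviewer A finds errors 0-9, reviewer B finds 4-11,
# so they share 6 findings and jointly found 12 distinct errors.
a = set(range(10))
b = set(range(4, 12))
total, missed = estimate_total_errors(a, b)
print(f"estimated total errors: {total:.1f}, estimated missed: {missed:.1f}")
# -> estimated total errors: 13.3, estimated missed: 1.3
```

Note how the estimate behaves exactly as described above: heavy overlap between the two vetters drives the estimated number of unfound errors toward zero, while little overlap drives it up.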

This way, you can determine whether you are catching, say, 40% to 60% of the errors instead of 92% to 98% of them. And of the errors you find, you might determine that 12% to 30% would have caused rework or other problems in excess of one person-month of effort had they not been found. Subjective customer responses like “very satisfied” can be useful, but the measures I just described should be much more informative.
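To turn per-error cost ranges like those into an overall figure, the calibrated 90% confidence intervals from point #3 can be combined with a simple Monte Carlo simulation. The sketch below is illustrative only: the three intervals are made up, and the lognormal shape is a common assumption for cost estimates, not something specified in this exchange.

```python
import math
import random

def sample_cost(lo, hi, rng):
    """Draw a cost from a lognormal whose 90% CI is (lo, hi).

    In log space the 90% CI spans mu +/- 1.645 * sigma, so we
    back out mu and sigma from the interval endpoints.
    """
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)
    return math.exp(rng.gauss(mu, sigma))

# Hypothetical calibrated 90% CIs (person-days of avoided rework)
# for three problems the review caught; these numbers are invented.
intervals = [(5, 40), (2, 15), (10, 120)]

rng = random.Random(42)
trials = sorted(
    sum(sample_cost(lo, hi, rng) for lo, hi in intervals)
    for _ in range(10_000)
)
p05, p95 = trials[500], trials[9500]
print(f"90% CI for total avoided cost: {p05:.0f} to {p95:.0f} person-days")
```

The output is itself a range rather than a point value, which is the spirit of the approach: the team's calibrated uncertainty about individual avoided problems propagates into an honest interval for the total value of the vetting process.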

Thanks for your interest.

Doug Hubbard