by Douglas Hubbard | Dec 22, 2008 | How To Measure Anything Blogs, Measurement Questions From Readers, News
Originally posted on http://www.howtomeasureanything.com/forums/ on Monday, December 22, 2008 4:23:05 PM, by Dynera.
“Hi Doug,
I have your book on order and I was wondering if you cover the measurement of prevention effectiveness. For instance, I was involved with a software architecture team whose purpose was to vet architecture designs. We received several feedback forms saying that our input was useful, but besides that we really didn’t see any other way to demonstrate our effectiveness.
Any feedback is appreciated.
Thanks.
Paul”
Paul,
First, my apologies for the long delay in my response. Between the holidays and site problems, it’s been quite a while since I’ve looked at the Forums.
Vetting architectural designs should have measures similar to any quality control process. Since you are a software architect, I would suggest the following:
1) Ask for more specifics in your feedback forms. Ask if any of the suggestions actually changed a design and how. Ask which suggestions, specifically, changed the design.
2) Tally these “potential cost savings identified” for each design you review.
3) Periodically go into more detail with a random sample of clients and the suggestions you made to them. For this smaller sample, take a suggestion that was identified as one that resulted in a specific change (as in point #1 above) and get more details about what would have changed. Was an error avoided that otherwise would have been costly and, if so, how costly? Some existing research or the calibrated estimates of your team might suffice to put a range on the costs of different problems had they not been avoided.
4) You can estimate the percentage of total errors that your process finds. Periodically have independent reviewers separately assess the same design and compare their findings. I explain a method in Chapter 14 for using the findings from two different vetters to determine the number of errors that neither found (see the sketch below). In short, if both vetters routinely find all of the same errors, then you probably find most of them. If each vetter finds lots of errors that the other does not find, then there are probably a lot that neither finds. You can also periodically check the process by the same method used to measure proofreaders – simply add a small set of errors yourself and see how many are found by the vetter.
This way, you can determine whether you are catching, say, 40% to 60% of the errors instead of 92% to 98% of the errors. And of the errors you find, you might determine that 12% to 30% would have caused rework or other problems in excess of 1 person-month of effort had they not been found. Subjective customer responses like “very satisfied” can be useful, but the measures I just described should be much more informative.
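To make point 4 concrete, here is a minimal sketch of the two-vetter overlap estimate, using the same capture-recapture logic described in Chapter 14. The reviewer counts are hypothetical, purely for illustration:

```python
# Capture-recapture estimate of total errors from two independent vetters.
# Hypothetical counts: vetter A finds 12 errors, vetter B finds 10,
# and 8 errors appear on both lists.

def estimate_total_errors(found_by_a: int, found_by_b: int, overlap: int) -> float:
    """Lincoln-Petersen estimator: total ~ (A * B) / overlap."""
    if overlap == 0:
        raise ValueError("No overlap between vetters; the estimate is unbounded.")
    return found_by_a * found_by_b / overlap

total = estimate_total_errors(12, 10, 8)   # ~15 errors estimated in the design
distinct_found = 12 + 10 - 8               # 14 distinct errors actually found

print(f"Estimated total errors: {total:.0f}")
print(f"Estimated share caught: {distinct_found / total:.0%}")  # ~93%
```

The more the two lists overlap, the higher the estimated share of errors caught, which is exactly the intuition in point 4.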
Thanks for your interest.
Doug Hubbard
by Douglas Hubbard | Dec 9, 2008 | Errata, How To Measure Anything Blogs, News
Welcome to the Errata thread in the book discussion of the How to Measure Anything Forum. An author goes through a lot of checks with the publisher, but some errors manage to get through. Some are my fault; some were caused by the typesetter or publisher not making previously requested changes. I just got my author’s copies 2 days ago (July 20th), about 2 weeks before the book gets to the stores, but I have already found a couple of errors. None should be confusing to the reader, but they were exasperating to me. Here is the list so far.
1) Dedication: My oldest son’s name is Evan, not Even. My children are mentioned in the dedication, and this one caused my wife to gasp when she saw it. I don’t know how it slipped through any of the proofing, but it is a top-priority change for the next print run.
2) Preface, page XII: The sentence “Statistics and quantitative methods courses were still fresh in my mind and I in some cases when someone called something “immeasurable”; I would remember a specific example where it was actually measured.” The first “I” is unnecessary.
3) Acknowledgements, page XV: Freeman Dyson’s name is spelled wrong. Yes, this is the famous physicist. Fortunately, his name is at least spelled correctly in chapter 13, where I briefly refer to my interview with him. Unfortunately, the incorrect spelling also seems to have made it to the index.
4) Chapter 2, page 13: Emily Rosa’s experiment had a total of 28 therapists in her sample, not 21.
5) Chapter 3, Page 28: In the Rule of Five example the samples are 30, 60, 45, 80, and 60 minutes, so the range should be 30 to 80, not 35 to 80 (see the quick check after this list).
6) Chapter 7: Page 91, Exhibit 7.3: In my writing, I had a habit of typing “*“ for multiplication since that is how it is used in Excel and most other spreadsheets. My instructions to my editor were to replace the asterisks with proper multiplication signs. They changed most of them throughout the book but the bottom of page 91 has several asterisks that were never changed to multiplication signs. Also, in step 4 there are asterisks next to the multiplication signs. This hasn’t seemed to confuse anyone I asked. People still correctly think the values are just being multiplied but might think the asterisk refers to a footnote (which it does not).
7) Chapter 10, pages 177-178: There is an error in the lower bound of a 90% confidence interval at the bottom of page 177. I say that the range is “79% to 85%”. Actually, 79% is the median of the range, and the proper lower bound is 73%. On the next page there is an error in the column headings of Exhibit 10.4. I say that the second column is computed by subtracting one normdist() function in Excel from another. Actually, the order should be reversed so that the first term is subtracted from the second; as written, the formula gives a negative answer, and taking the negative of that number gives the correct value. I don’t think this should confuse most readers unless they try to recreate the detailed table (which I don’t expect most to do). Fortunately, the downloadable example spreadsheet referred to in this part of the book corrects the error. The correction is in the spreadsheet named “Bayesian Inversion, Chapter 10” available in the downloads.
8) Chapter 12: page 205; There should be a period at the end of the first paragraph.
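For readers who want to check item 5 themselves, here is a quick sketch. The samples are the ones cited in the erratum, and 93.75% is the standard Rule of Five probability, the chance that the population median falls between the smallest and largest of five random samples:

```python
# The five commute-time samples from the Rule of Five example on page 28.
samples = [30, 60, 45, 80, 60]

# The corrected range is simply the min and max of the samples.
print(min(samples), max(samples))   # 30 80

# Chance the population median falls inside that range: 1 minus the
# chance that all five samples land on the same side of the median.
print(1 - 2 * 0.5 ** 5)             # 0.9375
```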
I’ve found errors in other books but you are the only author who posted them on a website as soon as the book came out. This ought to be the standard for how books deal with it. I was going to make a couple of errata entries but I see you already have them mentioned here. I’m sure you could probably find some statistics on average errors per book somewhere out there.
Bill:
9) Chapter 5, page 69. Third line from top of page – “He has has seen…” should be “He has seen…”. One has too many.
John Chandler-Pepelnjak:
Another one pretty close to your #4 above. Chapter 2, page 13. In my edition (don’t know if it is first print run or second), the half-width of your confidence interval is given as 16% and the resulting confidence interval is given as 44% to 66%. If Emily had 10 measurements on 28 therapists, the CI half-width should be 6%, giving a CI of 44% to 56%. So it looks like both the 16% and the resulting confidence interval are typos. (If you use the corrected number of therapists, then that half-width goes up to 7% and the CI is 43% to 57%.)
Thanks for writing this book. It’s an enjoyable read and an important message.
Thanks for your input and for your interest in my book. You’ve made an astute observation. I’ve had a couple of email conversations about this so your comment gives me an opportunity to summarize the point. Other than one minor caveat, you are correct.
First, it is important to point out that this particular claim is simply about the chance of getting 44% of coin flips out of 280 on heads (i.e., about 123 out of 280) and not about the confidence interval for this particular study (which, of course, would have a mean of 44%). I only point that out because it confused a couple of other readers (apparently not you).
Of course, the relevant tool for this is the cumulative binomial distribution – specifically, 123 or fewer successes out of 280 trials with a 50% chance of success per trial. This comes out to a 2.42% chance of anything up to and including that number of successes, which is close to the lower bound of a 95% CI. Likewise, everything up to and including 156 heads has a 97.58% chance. So we get pretty close to a 95% CI of 44% to 56%. Of course, this (combinatoric) distribution is meant to work with integers, and there are some rounding issues. You can get small differences based on whether you think the result of 123 or 156 heads should be included in the range or falls just outside of it (which I think might explain how you got a 7% half-width). But you are correct that there was definitely a typo.
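For anyone who wants to reproduce that arithmetic, here is a minimal sketch using SciPy’s cumulative binomial; the counts are the ones discussed above:

```python
from scipy.stats import binom

n, p = 280, 0.5   # 280 coin flips (28 therapists x 10 trials), 50% chance each

# Chance of 123 or fewer heads (i.e., 44% heads or fewer).
lower_tail = binom.cdf(123, n, p)   # ~0.0242

# Chance of 156 or fewer heads (i.e., everything up to 56% heads).
upper_tail = binom.cdf(156, n, p)   # ~0.9758

print(f"P(X <= 123) = {lower_tail:.4f}")  # close to the 2.5% tail of a 95% CI
print(f"P(X <= 156) = {upper_tail:.4f}")  # close to the 97.5% tail
```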
Thanks,
Doug Hubbard
by Douglas Hubbard | Dec 8, 2008 | Computing Value of Information, How To Measure Anything Blogs, News
Originally posted at http://www.howtomeasureanything.com, on Monday, December 08, 2008 10:08:08 PM, by Unknown.
“I’m trying to quickly identify the items I need to spend time measuring. In your book, you determine what to measure by computing the value of information. You refer to a macro that you run on your spreadsheet that automatically computes the value of information and thus permits you to identify those items most worth spending extra time to refine their measurements. Once I list potential things that I might want to measure, do I estimate, using my calibrated estimators, a range for the chance of being wrong and the cost of being wrong and, using something like @RISK, multiply these two lists of probability distributions together to arrive at a list of distributions for all the things I might want to measure? Then, do I look over this list of values of information and select the few that have significantly higher values of information?
I don’t want you to reveal your proprietary macro, but am I on the right track to determining what the value of information is?”
You were on track right up to the sentence that starts “Once I list potential things that I might want to measure…”. You already have calibrated estimates by that point. Remember, you have to have calibrated estimates first before you can even compute the value of information. Once the value of information indicates that you need to measure something, then it’s time to get new observations. As I mention in the book, you could also use calibrated estimates for this second round, but only if you are giving the estimators new information that will allow them to reduce their uncertainty.
So, first you have your original calibrated estimates, THEN you compute the value of information, THEN you measure the things that matter. In addition to using calibrated estimators again (assuming you are finding new data to give them to reduce their ranges), I mention several methods in the book, including decomposition, sampling methods, and controlled experiments. It just depends on what you need to measure.
Also, the chance of being wrong and the cost of being wrong can already be computed from the original calibrated estimates you provided and the business case you put them in. You do not have to estimate them separately in addition to the original calibrated estimates themselves. Look at the examples I gave in the chapter on the value of information.
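As a rough illustration of that kind of example, here is a minimal sketch of the simplest information-value calculation: the expected opportunity loss of a go/no-go decision, computed directly from a calibrated range. The decision, threshold, and dollar figures are hypothetical, not from the book:

```python
import random

# Hypothetical decision: proceed with a project that pays off only if annual
# savings exceed a $400,000 break-even point. A calibrated 90% CI for savings
# is $300,000 to $700,000, modeled here as a normal distribution.
random.seed(1)

BREAK_EVEN = 400_000
mean = (300_000 + 700_000) / 2
sd = (700_000 - 300_000) / (2 * 1.645)   # 90% CI bounds sit ~1.645 sd from mean

savings = [random.gauss(mean, sd) for _ in range(100_000)]

# If we proceed, we are "wrong" whenever savings fall below break-even, and
# the opportunity loss is the shortfall. The expected loss is the chance of
# being wrong times the average cost of being wrong -- the value of perfect
# information about this variable.
losses = [max(BREAK_EVEN - s, 0) for s in savings]
p_wrong = sum(s < BREAK_EVEN for s in savings) / len(savings)
evpi = sum(losses) / len(losses)

print(f"Chance of being wrong: {p_wrong:.1%}")                  # ~20%
print(f"Expected value of perfect information: ${evpi:,.0f}")   # ~$14,000
```

That expected loss is the ceiling on what a measurement of the savings variable could rationally be worth.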
My macros make it more convenient to compute more complicated information values, but they are not necessary for the simplest examples. Did you see how I computed the information values in that chapter? I already had calibrated estimates on the measurements themselves. Try a particular example and ask me about that example specifically if you are still having problems.
Thanks for your use of my book and please stay in touch.
Doug Hubbard
by Douglas Hubbard | Dec 8, 2008 | How To Measure Anything Blogs, Measurement Questions From Readers, News
Originally posted at http://www.howtomeasureanything.com, on Monday, December 08, 2008 9:32:43 PM, by Amichai.
“The challenge is to come up with a simple, clearly defined number (or a few numbers) that reflect how well the organization is executing towards the goals from the CEO level to the line worker level. It would also be good to have an insight into what misalignment occurs between departments.
Most importantly, we should be able to track our alignment score over time and track the impact of our improvements and corrective actions.
Any advice would be greatly appreciated!”
Thanks for your question. I mention strategic alignment just briefly in the book. As with all measurement questions, we start with “What do you mean?” In this case, you offered perhaps part of the meaning: how well the organization is executing towards the goals from the CEO level to the line worker level. I would think that the rate at which it is approaching certain defined goals is itself measurable, especially if the goal is measurable. But perhaps some concrete examples could help us zero in more. Does alignment mean that the activities of each of these levels are contributing to the probability of reaching a goal at a stated time? Does it mean these activities are contributing toward reaching the goal faster? Perhaps the observable object is just the rate of completion of the goal itself. Isn’t alignment just another way to say performance, given the way you are using it? If not, what is different?
If you are not just concerned about a “% complete” measure toward a particular goal, then it seems to me that, in effect, you are trying to correlate certain observable inputs to the probability or rate of completion of stated goals – which, again, sounds a bit like performance. So what you are really trying to measure is whether each of some list of observable activities is improving the rate or probability of completion of a specific goal. Another issue you will have to consider is the tradeoff among multiple, possibly competing, goals. If you have more than one goal, you will have to collapse them into a single value using the utility curve approach I describe in the book, or possibly by simply monetizing the various goals (a simple sketch of the latter follows below). This way you can tell whether your overall alignment has improved when you have increased satisfaction of one goal but slightly decreased another. Ultimately, it seems like you have a type of forecasting problem. You are asking, “Based on what I have seen so far (of some list of observations), are we getting closer to or further away from meeting Goal X?” The score you are looking for should correlate with the rate or probability of goal completion.
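As a purely hypothetical illustration of the monetizing option (the goals, completion figures, and dollar values below are invented for the example, and this is simpler than the utility curve method in the book):

```python
# Collapse progress on three competing goals into a single monetized
# "alignment" score so that trade-offs between goals become visible.
# The dollar value of each fully completed goal would come from calibrated
# estimates or a business case, not from thin air.
goals = {
    "reduce_cycle_time":  {"completion": 0.60, "value_if_complete": 2_000_000},
    "customer_retention": {"completion": 0.45, "value_if_complete": 3_500_000},
    "new_market_launch":  {"completion": 0.20, "value_if_complete": 1_500_000},
}

score = sum(g["completion"] * g["value_if_complete"] for g in goals.values())
print(f"Monetized alignment score: ${score:,.0f}")   # $3,075,000

# Tracked period over period, an improvement on one goal that comes at the
# expense of another nets out in this single number.
```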
In addition to the questions I asked above, here are some more I have for you:
* Is your definition of alignment close to what I just talked about?
* What are some of the goals you are talking about?
* What will be the use of this measure, specifically?
* How much do you know now? What have you seen so far that causes you to think that some departments/individuals are more aligned than others?
If you can answer these questions then I think I can give you some very specific ideas.
Thanks,
Doug Hubbard
by Douglas Hubbard | Dec 8, 2008 | General Topics, How To Measure Anything Blogs, News
As I mentioned in the book, I often offer a sort of measurement challenge to an audience where I’m speaking about measuring the immeasurable. I ask anyone listening to give me an example of an impossible measurement at any time during the conference and, after a 15-minute session with that person, I’ll explain a method to measure that very thing. So far, I haven’t actually failed to identify a valid measurement in that time period, but there were times it took longer to explain to the person that what I described actually would measure it.
Give me an example of a difficult measurement. Any example. The harder the better. And we’ll discuss the answer right here.