by Douglas Hubbard | Mar 7, 2010 | General Topics, How To Measure Anything Blogs, News
I’m reintroducing the Measurement Challenge to the blog. I ran it for a couple of years on the old site and it produced some very interesting posts.
Use this thread to post comments about the most difficult – or even apparently “impossible” – measurements you can imagine. I am looking for truly difficult problems that might take more than a couple of rounds of query/response to resolve. Give it your best shot!
Doug Hubbard
by Douglas Hubbard | Feb 24, 2010 | General Topics, How To Measure Anything Blogs, News, The Failure of Risk Management Blogs
I came across more interesting research about possible “placebo effects” in decision making. According to the two studies cited below, formal training in lie detection (e.g., training meant to make law enforcement officers more likely to detect an untruthful statement by a suspect) has a curious effect: it greatly increases the experts’ confidence in their own judgments even though it may decrease their performance at detecting lies. Such placebo effects were a central topic of The Failure of Risk Management. I’m including this in the second edition of How to Measure Anything as another example of how some methods (like formal training) may seem to work and increase users’ confidence but, in reality, don’t work at all.
- DePaulo, B. M., Charlton, K., Cooper, H., Lindsay, J. J., Muhlenbruck, L., “The accuracy-confidence correlation in the detection of deception,” Personality and Social Psychology Review, 1(4), pp. 346-357, 1997.
- Kassin, S. M., Fong, C. T., “I’m innocent!: Effects of training on judgments of truth and deception in the interrogation room,” Law and Human Behavior, 23, pp. 499-516, 1999.
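To make the finding concrete, here is a minimal sketch in Python, using entirely hypothetical numbers (not data from the cited studies), of the kind of accuracy-confidence correlation DePaulo et al. examine. The point it illustrates: if training raises confidence without improving performance, the judges can be very confident on average while the correlation between confidence and accuracy stays near zero.

```python
import numpy as np

# Entirely hypothetical numbers for illustration -- not data from the
# cited studies. Each simulated "judge" gets a lie-detection accuracy
# near chance and an independently drawn self-rated confidence, as if
# training had raised confidence without improving performance.
rng = np.random.default_rng(42)
n_judges = 200

accuracy = rng.uniform(0.45, 0.60, n_judges)    # proportion correct, near chance
confidence = rng.uniform(0.60, 0.95, n_judges)  # self-rated confidence after "training"

# The accuracy-confidence correlation: near zero when the two vary
# independently, no matter how high the average confidence is.
r = np.corrcoef(accuracy, confidence)[0, 1]
print(f"mean confidence: {confidence.mean():.2f}")
print(f"mean accuracy:   {accuracy.mean():.2f}")
print(f"accuracy-confidence correlation: {r:+.2f}")
```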
Thanks to my colleague Michael Gordon-Smith in Australia for sending me these citations.
Doug Hubbard
by Douglas Hubbard | Feb 20, 2010 | News, The Failure of Risk Management Blogs
I get many emails about the validity of Risk Maps, Risk Scores and Risk Matrices in risk analysis. I know there are a lot of passionate proponents of these methods (and many heavily invested vendors), but there is still no evidence that such methods improve forecasts or decisions.
Read my skeptical look at many risk analysis methods in the OR/MS Today and Analytics articles (see the Articles page). These articles, and the book even more so, compile research that should make any manager suspicious about whether the perceived effectiveness of these methods is real or imagined.
I’ve updated the upcoming second edition of How to Measure Anything with even more research in that vein. Stay tuned and post your comments.
Doug Hubbard
by Douglas Hubbard | Feb 8, 2010 | General Topics, News
See the webinars page to sign up for intensive training sessions for Applied Information Economics. You can take the popular “Calibration” webinar or even sign up for the full AIE Module I & II training (required for AIE Level I certification). Each webinar is scheduled for multiple dates and times so that you are sure to find something that fits your schedule. Contact HDR for group rates.
by Douglas Hubbard | Nov 24, 2009 | General Topics, News
We have a new batch of the 4-page, color, laminated reference cards for How to Measure Anything. Some of the material has already been updated for the upcoming second edition of the book (spring 2010). See the products page for details.
by Douglas Hubbard | Oct 28, 2009 | General Topics, News
The article I co-authored with Doug Samuelson in Analytics Magazine just came out in the fall issue. “Analysis Placebos: The Difference Between Perceived and Real Benefits of Risk Analysis and Decision Models” explains why many popular analysis methods and models may have entirely illusory benefits. See the online version of the article at http://viewer.zmags.com/publication/2d674a63#/2d674a63/15