I came across more interesting research about possible “placebo effects” in decision making. According to the two studies cited below, receiving formal training in lie detection (e.g. so that law enforcement officers are more likely to detect an untruthful statement by a suspect) has a curious effect. The training greatly increases the confidence of the experts in their own judgments even though it may decrease their performance at detecting lies. Such placebo effects were a central topic of The Failure of Risk Management. I’m including this in the second edition of How to Measure Anything as another example of how some methods (like formal training) may seem to work and increase the confidence of their users but, in reality, don’t work at all.
- DePaulo, B. M., Charlton, K., Cooper, H., Lindsay, J. J., Muhlenbruck, L., “The Accuracy-Confidence Correlation in the Detection of Deception,” Personality and Social Psychology Review 1(4), pp. 346–357, 1997
- Kassin, S. M., Fong, C. T., “‘I’m Innocent!’: Effects of Training on Judgments of Truth and Deception in the Interrogation Room,” Law and Human Behavior 23, pp. 499–516, 1999
Thanks to my colleague Michael Gordon-Smith in Australia for sending me these citations.
There is an interesting corollary here known as “overcompensating behavior.” I first heard of the term from an NHTSA official with respect to highway safety. He commented that people who drive “safe” vehicles, meaning vehicles in which passengers are more likely to survive a crash (with lesser or no injuries), tend to drive more recklessly than drivers of less “safe” vehicles (such as a small sedan) because they feel safer and less at risk than they otherwise would. As a result, crashes and injuries can sometimes be higher in these “safer” vehicles. Think about it: you’re driving along in your little Honda in a blizzard, gingerly creeping along the highway at 35 mph, when a GMC Denali races by at 65 mph. More than likely, the Denali will do no better skidding to a stop than the Honda will.
Yes, that is interesting. This effect probably does have bearing on making decisions. We act with more confidence because we went through a “formal process.”
I have a couple more examples. I recall that 31 years ago, when I was a young private in the US Army Combat Engineer school, we handled barbed “concertina” wire with no gloves. Of course, gloves reduce the sense of touch, but we were told that they also produce a false sense of security. The wire was so sharp the gloves wouldn’t help anyway.
When I wrote about this effect in Analytics Magazine (see the previous article post), my co-author and I talked about the analogy of a lost hiker in the winter drinking brandy to stay warm. It creates a sensation of warmth, but we know that is only because the alcohol causes capillaries in the skin to dilate, so heat is traveling *out* of the skin faster. So if we care about survival, we don’t drink the brandy. If we care about feeling better, we drink the brandy.
I wonder how many popular management methods are like drinking the brandy.
Thanks for your post!
Along the lines of not using gloves with concertina wire: I recall that about ten years ago, back-support braces were touted as a “safety savior.” We then discovered our guys were lifting heavier items and still not using proper posture (even though trained), and so injuries, though minor, went up. We took the braces away and had to seriously re-train, and enforce the training, to overcome the bad habits.
Thanks for yet another interesting analogy to this problem of confidence in management decisions. If only the damage done by bad management decisions were limited to the managers’ hands and backs!
This may have been a case of correlation without causation. Suppose we had a series of back injuries, and as part of recovery and prevention of recurring injuries, doctors recommended or issued back braces. Then someone gets the idea: “Hey, if that is helping them, maybe it will help everyone else?” Except for this: those who were originally issued the equipment because they had back injuries are now predisposed to take better care of their backs. So, a correlation misinterpreted because of a lack of a control group.
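To see how that selection effect can manufacture an apparent benefit, here is a toy simulation (all the numbers and variable names are my own illustrative assumptions, not figures from any study): braces are issued only to previously injured workers, those workers also become more careful, and carefulness, not the brace, is what reduces injury risk. A naive comparison makes the brace look protective; conditioning on carefulness (the "control group" view) makes the effect vanish.

```python
import random

random.seed(42)

N = 10_000
workers = []
for _ in range(N):
    previously_injured = random.random() < 0.2
    # Selection: braces are issued only to previously injured workers
    has_brace = previously_injured
    # Confounder: previously injured workers become careful; some others are too
    careful = previously_injured or random.random() < 0.3
    # Injury risk depends ONLY on carefulness; the brace has zero causal effect
    p_injury = 0.05 if careful else 0.15
    injured = random.random() < p_injury
    workers.append((has_brace, careful, injured))

def rate(rows):
    rows = list(rows)
    return sum(r[2] for r in rows) / len(rows)

brace = [w for w in workers if w[0]]
no_brace = [w for w in workers if not w[0]]
careful_no_brace = [w for w in no_brace if w[1]]

# Naive comparison: the brace group looks much safer...
print(f"injury rate with brace:    {rate(brace):.3f}")
print(f"injury rate without brace: {rate(no_brace):.3f}")
# ...but comparing like with like (careful workers only) erases the "benefit"
print(f"careful, no brace:         {rate(careful_no_brace):.3f}")
```

Issuing braces to everyone, as in the anecdote, breaks the selection link between brace and carefulness, so the apparent protective effect would not carry over.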
A further effect of training is to provide tools and techniques that the course student can then apply. In many cases this aids organisational consistency at the expense of operational freedom. In the case in point, consistency means less room for a defence lawyer to attack the process. The reverse of this is that the tool or technique user applies substantial concentration to the application and reduced thinking beyond the use of these tools. In the case in point, this means less attention to experience-derived observations that may lead to different and more rewarding results.
A case of this ‘tools focus’ I find with second-degree students in Project & Programme Management. As ‘project managers to be’ they commit early to developing a dissertation plan — and, based on their earlier education, this is in Critical Path Analysis (CPA) format. I then send them away to re-investigate the assumptions underlying CPA, which we then discuss… from which they conclude CPA is a totally inappropriate tool, and other planning tools are then sought and applied. If this tunnel-vision ‘learning to application’ occurs in academic environments, what need we allow for in commercial and job-focused training?
When I have been responsible for an organisation’s planning training, I have always sought to create situations where the taught technique fails, or requires special care, or needs to inform decision making outside the process that is then fed back into the process. The student is also made aware that continued further learning is required to keep pushing their capability boundaries with the tool or technique. This to me is the only way to keep the ‘student’ alive when using the tool or technique in workplace application. This approach has delivered substantial commercial benefits in the organisation in which I originally developed it.
To be clear, I never said that there is never any benefit of training. I said it is possible for training to merely improve confidence without improving performance. And I cite some published research that measured this effect.
I’m not really questioning your experience in this area, but for the sake of argument, let’s consider where you state that your approach has “substantial commercial benefits.” How were these benefits estimated? If it was based on the stated perceptions of people in the organization, I’m not surprised there is a perceived benefit. But, as I stated, research shows that there will be such perceptions even when there are no benefits. Or was this based on an anecdote? In that case we run the risk that this was just the effect of a single random sample. The best way to show the commercial benefits of such an approach would be to show objectively measured effects over a number of samples. In each case where I claim that one method is a significant improvement on intuition, I show the research that demonstrates that claim by measuring the results in controlled experiments.
Thanks again for your input,
Can you comment on the possibility of conducting calibration remotely via electronic surveys? It would obviously require an iterative process, but if your experts are dispersed or have competing schedules…
I am very interested in engaging in this study. So now I am waiting for the next versions of these books. I can’t wait for them.