Yes, IT seems to stump a lot of people who try to measure its value. But the methods for measuring value don't have to be that difficult. First, my readers will know that measuring value means reducing your prior uncertainty about it. My readers also know that the greater your initial state of uncertainty, the more a few observations will tell you.
Contribute specific IT valuation problems here and we'll talk about how to measure them!
Doug Hubbard
I really enjoyed both your books and understand what you advocate in both (although in HTMA the math gets a bit heavy for me later). In terms of practice, however, I would say I'm in the consciously competent phase of my learning, where I know what I have to do, but this stuff doesn't come naturally yet.
I spend a lot of time with collaborative tools like SharePoint. I went so far as to write a series of posts on ROI (NPV, IRR) for SharePoint nerds before I read your books. When you say "we do this for a positive difference – tell me what the difference is" – I fully get that and have used it since to get people to understand governance topics, as the metaphor works well there too.
So let’s do a typical collaboration scenario. The organisation wishes to “Collaborate better”. A typical high level strategic plan might have some aspirational goals like “All staff have access via the intranet to the information and tools they need to perform their duties effectively, efficiently, and in accordance with corporate policies and procedures… will be a valuable internal communications tool, and delivers benefits in terms of whole-of-department shared understanding and evolution of our corporate culture.”
Okay, so what is the difference between now and the aspirational future state?
Messed up file system … creating
Duplication and rework … because of
Findability issues,
Wrong information,
Not knowing which version of information is authoritative, … which exacerbates the problems of
Information and organisational silos … which make it difficult to
Communicate and execute strategy
Part of this problem is not so much the tool as it is culture and behavioural/learning styles.
Now things get to the tougher bit, where you move from a nice feel-good aspirational future state to quantifying where we are now, and eventually to breaking it down into a Monte Carlo series of discounted future cash flows (I sketch what I mean below).
How will we know if we have achieved the future state? How much should we spend attaining it? How do we account for user resistance?
SharePoint offers a range of capabilities to address the issues above. Not sure if you need to know them, but I will send more detail through if you want. Insofar as costs are concerned, you can see my deliberately simplified ROI scenario for collab here:
http://www.cleverworkarounds.com/2007/11/28/learn-to-talk-to-your-cfo-collaboration-scenario-part-3/ (*blush*, I had never read your book when doing this 🙂)
SharePoint is often a catalyst for organisational/cultural change, with the misguided notion that a tool alone will reduce the 'chaos' that causes projects to fail. I am not sure how I would measure organisational culture change, but for now I expect user resistance or lack of buy-in to be around 10-45% for this scenario.
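To make the Monte Carlo idea above concrete, here is a minimal sketch of the kind of simulation I have in mind. Apart from the 10-45% resistance range, every input (headcount, hourly rate, time saved, cost) is a hypothetical placeholder; this illustrates the mechanics, not a real business case:

```python
import random

# Hypothetical placeholder inputs -- for illustrating the mechanics only.
TRIALS = 10_000
DISCOUNT_RATE = 0.10      # annual discount rate
YEARS = 3                 # horizon for the benefit stream
UPFRONT_COST = 200_000    # licensing + implementation (hypothetical)
STAFF = 500               # hypothetical headcount
HOURLY_RATE = 40.0        # hypothetical loaded hourly labour cost
WORK_DAYS = 230           # working days per year

def npv_one_trial():
    """Sample the uncertain inputs once and return the resulting NPV."""
    minutes_saved = random.uniform(5, 30)        # time saved per employee per day
    adoption = 1.0 - random.uniform(0.10, 0.45)  # the 10-45% resistance range
    annual_benefit = ((minutes_saved / 60.0) * HOURLY_RATE
                      * WORK_DAYS * STAFF * adoption)
    discounted = sum(annual_benefit / (1 + DISCOUNT_RATE) ** t
                     for t in range(1, YEARS + 1))
    return discounted - UPFRONT_COST

npvs = sorted(npv_one_trial() for _ in range(TRIALS))
print(f"Median NPV: ${npvs[TRIALS // 2]:,.0f}")
print(f"P(NPV < 0): {sum(n < 0 for n in npvs) / TRIALS:.1%}")
```

The output is a distribution of NPVs rather than a single figure, so "how much should we spend attaining it?" becomes a question about an acceptable probability of loss rather than a single break-even number.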
What I would be interested in is the sorts of questions you would ask for the above situation and the sorts of metrics you would try to identify based on the above. This would be great guidance to help me get more fluent with this way of working.
Additionally, as a side note, it would be interesting to quantify the value of something like Twitter, and whether it should be blocked or not. I'd be interested in hearing you "think out loud" about that one.
kind regards
Paul Culmsee
http://www.cleverworkarounds.com
Paul,
Thanks for your detailed post and please excuse my delay in response. I first looked at it a couple of days ago but it seemed so involved I had to wait until I had time to respond.
As in all measurement problems we ask:
What do you really mean by ______? (collaboration, cultural change, etc.)
How would you observe it? (If it matters in any way, it has consequences that are observable)
Why do you care? In other words, what decisions could be made differently if you knew the answer?
What is your current state of uncertainty and what is the value of additional information?
To answer how to measure organizational change, for example, we have to figure out what it is and how you observe it (i.e. list examples of what you see differently when organizational change increases or decreases). This only seems difficult to measure because people tend to use this concept so ambiguously. Ask yourself why we should care about this. Is it really "organizational change" we care about or something more specific? (I can imagine some highly undesirable organizational changes.) What specific undesirable behaviors are you hoping to turn into desirable behaviors? What makes them desirable/undesirable, exactly? Once you decide what behaviors "organizational change" is supposed to correct/improve, then it's merely a matter of observing those behaviors to measure them.
I also add at the end of this process that you need to quantify your current state of uncertainty about these behaviors (once you define what they are) and compute the value of additional information. This is critical to guiding any measurement effort. Defining the current state of uncertainty (as I explain in the book) is necessary to compute the value of information. And knowing the value of information often makes all the difference in the world for figuring out what to measure and how (because we would probably find that the highest value measurements are, as usual, a surprise).
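To give a feel for the computation (a deliberately simplified stand-in for the approximations in the book, with made-up numbers): treat the value of perfect information as the expected opportunity loss of the decision you would make under your current uncertainty.

```python
import random

# A toy value-of-information calculation with made-up numbers.
# Decision rule: invest if the expected NPV is positive. EVPI is then the
# expected opportunity loss of that default decision under current uncertainty.
TRIALS = 100_000
random.seed(1)

# Current uncertainty about the NPV, expressed here as a simple range.
npvs = [random.uniform(-100_000, 300_000) for _ in range(TRIALS)]
expected_npv = sum(npvs) / TRIALS

if expected_npv > 0:
    # We would invest, so we lose money in the scenarios where NPV < 0.
    evpi = sum(-n for n in npvs if n < 0) / TRIALS
else:
    # We would reject, so we forgo gains in the scenarios where NPV > 0.
    evpi = sum(n for n in npvs if n > 0) / TRIALS

print(f"Expected NPV: ${expected_npv:,.0f}")  # about $100,000
print(f"EVPI:         ${evpi:,.0f}")          # about $12,500
```

If the EVPI comes out at about $12,500, then spending more than that measuring this particular variable is wasted effort, no matter how interesting the measurement might be.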
Twitter, like anything else, has an impact on observed behavior and, consequently, on the outputs of those behaviors. Just run it through the questions I ask above and see what you come up with (I already have some ideas in mind, but I wanted to give you a chance to work through it first). Remember, measuring human behavior is not at all new. Quantifying how our environment (including technology) affects our behaviors and outputs has several well-worn methods. For starters, if you think Twitter adds to or subtracts from productivity, have you tried surveying how much time people spend on Twitter and seeing if there is any correlation between Twitter use and output? Just a thought.
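A first pass might look something like this (the survey numbers below are entirely made up; the point is just the arithmetic of the correlation check):

```python
# Hypothetical survey results -- placeholders for real data.
twitter_minutes_per_day = [0, 5, 10, 15, 20, 30, 45, 60, 90, 120]
output_units_per_week = [40, 42, 44, 43, 41, 45, 38, 36, 35, 30]

n = len(twitter_minutes_per_day)
mean_x = sum(twitter_minutes_per_day) / n
mean_y = sum(output_units_per_week) / n

# Pearson correlation coefficient, computed by hand for transparency.
cov = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(twitter_minutes_per_day, output_units_per_week))
var_x = sum((x - mean_x) ** 2 for x in twitter_minutes_per_day)
var_y = sum((y - mean_y) ** 2 for y in output_units_per_week)

r = cov / (var_x * var_y) ** 0.5
print(f"Correlation between Twitter time and output: r = {r:.2f}")
```

A negative r would not prove that Twitter causes lower output, of course; it just tells you whether there is anything here worth measuring more carefully.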
Thanks for your post.
Doug Hubbard
Hi Again
Asking the question "What do you really mean by ______?" goes to the root cause of many a project failure in the first place and is particularly painful for wicked problems. Stakeholders have usually not answered this question to a sufficient level and do not have the necessary level of shared understanding to commit to a course of action. I had a stakeholder ask me the difference between SharePoint and Skype once (a third of the way through an implementation), with the rationale that "I can *collaborate* with anyone in the world with Skype, why should I pay $200,000 for SharePoint?"
I guess my question is less around the theory and more around validation from you or others who have modelled such a question. "What do we really mean by collaboration?" is a good question to ask a group, but it is deceptively hard to answer. If you ask a group this question, you will probably get silence initially and will need to prompt a little further. "Collaboration" is just a term that different people will interpret differently. I view terms like this as the means by which you get from a current state to a desirable future state. Governance, well-being, and security are all similar examples.
(This is why I think your book has a lot of value for "wicked problem" and "adaptive leadership" academics: AIE is one of several tools that can be used to help people get closer to a state of shared understanding by framing things in terms of the minimum expected and aspirational positive differences made.)
My problem is in the decomposition, and I think that is where the art lies in AIE, because I suspect decomposition is itself subjective and influenced by biases.
One of my anchoring biases is that I read a lot and tend to make a lot of connections between various concepts and models (like your stuff and wicked problems, as I just mentioned above). So when I start decomposing the aspirational goal "All staff have access [snip] shared understanding and evolution of our corporate culture", I tend to see many components and sub-causes, each of which could be measurable. How far does one need to decompose? How do we know we have decomposed something realistically?
For example, I can decompose “All staff have access via the intranet to the information and tools they need to perform their duties effectively, efficiently, and in accordance with corporate policies and procedures” to:
How will we know staff are performing their duties effectively?
How will we know staff are performing their duties in accordance with corporate policies and procedures?
Next step: What is "effective"? How do I observe it?
As you state in your book, chances are someone has already measured this before, so don't reinvent the wheel (hint hint!). So I do a bit of research into measuring employee efficiency, and most of what I find is either complex in itself or uses examples that only make sense for certain markets or sectors.
At this point, one of the characteristics of wicked problems comes into play: "a wicked problem is always considered the symptom of another problem", and "wicked problems have no stopping rule". I could keep peeling this onion and end up simply confusing people. Is it any wonder that people pull back and revert to simpler, yet less effective, methods?
regards
Paul
Thanks again for the post.
Keep in mind that after the first question I pose above (i.e. "What do you mean by _____?"), the rest are a lot less philosophical and more to the point. No matter how complex the problem is or how stunned people might be by such questions, the next questions I asked above should help them work through it. If they just state what is observable about "collaboration" or "effective" and why they care, they are halfway to measuring it. If they really can't articulate why anyone would want more of X, then they don't really have any reason to believe the benefit is positive. If they insist it is positive, just press the point and ask why.
After all, it may be the case that they are simply using “feel good” words that have no real value whatsoever. Business, after all, is full of consulting fluff words. This revelation, by itself, can be a very useful finding that might save a company a lot of time and money.
Doug Hubbard
Okay, I’ll ask the question a different way.
How does one validate that the chain of decomposition logic is valid or resilient? When I did the cleverworkarounds article, I used an example of staff members saving 5 minutes per day in improved efficiency. At the time I thought it was a crappy example, but it served to explain financial ROI to non-financial people. The reason I did not like it was that I started with a metric ("what if we saved 5 minutes per day?") and then played with the numbers. Maybe that's valid? But there is no real link to any broader goal. I felt when writing it that I was cheating 🙂
Now, for example, if I told someone that improved collaboration meant "improved findability", and we then reasoned that findability would save 10-30 minutes a day per employee "finding stuff", would you consider that sufficient to run the numbers? Perhaps yes, especially if the initial results say the investment is overwhelmingly good.
But this chain of decomposition is inherently qualitative, and the chain of logic itself is subject to the same biases that plague the estimates that go along with it. We calibrate estimates; how do we calibrate decomposition? I feel that, as with DCF, a poor assumption here could invalidate the model.
regards
Paul
Validation of any decision model is just about the same as how models are validated in physics. One approach is logical/mathematical validation. The total of your groceries at the check-out is based on a "decomposition" of the items, applicable discounts, and taxes. Unless there is a math error, that decomposition is correct because it is a matter of the rules of logic. Checking for obvious math errors systematically is a key part of the quality control of decision models. We can call this the audit of the model. It only looks at mathematical correctness, not empirical validation of the estimates input into the model.
The other is "component level" empirical validation. The inputs to a model are like the gears of a clock, and one way to validate the overall model is to at least ensure the individual gears are sound. You asked if a range of "10-30 minutes per day of finding stuff" is "enough to run the numbers". If it is a calibrated estimate from someone who has some knowledge of that issue, then yes. Remember, calibration of experts is itself a form of validation. As one would with any scientific instrument, you first calibrate the human source of the estimate so that you know what their error is. Of course, if the information value justifies it, you might reduce uncertainty about the decomposition further with empirical measurements. The point of the information value calculation is that we don't waste resources validating what doesn't need to be validated.
It is also worth pointing out the difference between calibrated estimates and “assumptions”. If you had to pick point estimates for uncertain values, that’s an assumption. But if you are calibrated by repeatedly demonstrating the ability to realistically assess odds and ranges, then the “10-30 minute” range is not really an assumption. If a calibrated person is highly uncertain about a value, their ranges and probabilities will indicate that. They never need to make an assumption that they know a value that they don’t. Testing assumptions is a much bigger problem for traditional point-value business cases than probabilistic methods.
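To make that distinction concrete, here is a sketch of how a calibrated range actually gets used in a model (I use the normal approximation for brevity; for quantities that cannot go negative, a lognormal is often a better choice):

```python
import random

# Encode a calibrated 90% confidence interval as a probability distribution.
# On a normal distribution, a 90% CI spans about 3.29 standard deviations.
def sample_from_90ci(lower, upper):
    mean = (lower + upper) / 2
    sd = (upper - lower) / 3.29
    return random.gauss(mean, sd)

# The "10-30 minutes per day" calibrated range discussed above.
samples = [sample_from_90ci(10, 30) for _ in range(100_000)]
inside = sum(10 <= s <= 30 for s in samples) / len(samples)
print(f"Share of samples inside the stated range: {inside:.0%}")  # about 90%
```

A wide calibrated range simply produces a wide distribution; nowhere does the expert have to pretend to know a point value they don't.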
Finally, there are modeling errors in both stochastic and deterministic models, of course, but that brings me to the last point. We need to keep in mind that validation of a decision model – just like in physics – is not really about what is "true". It is about which approach outperforms another in terms of improved forecasts of actual outcomes. The research (e.g. Paul Meehl and others) shows that those who decompose outperform those who don't when it comes to estimating values, without any additional levels of "validation" on the decomposition. There may be flawed assumptions in a decomposition, but apparently not decomposing at all results in even more error.
My main concern is that this validation should be a process that is, itself, economically justified according to the information values of the uncertainties. If the inputs are from calibrated experts, and if the model has been audited for obvious math errors, and if the information value is negligible, then the model is "valid enough" and any further effort to improve decisions should be spent elsewhere. Yes, you could probably make a model better if you spent more time validating it. But there is a point of diminishing returns to that, especially given that the model is already a huge improvement compared to what would have been done otherwise.
Thanks,
Doug
I take all your points, and the calibration of experts is the single most profound thing I took away from your books. I also realise, after your explanations, that the purpose of your book is not to answer the question that I have.
I spend a lot of time performing the craft of dialogue mapping (www.cognexus.org), and I learned this craft because I wanted a tool/technique to get to the root of what most people would perhaps term 'organisational chaos'. One of my early DM gigs was working with a group who had to make a complex planning decision (freeways and the like). This was a public/private/community alliance with about every transport-related government department you can imagine involved, along with homeowners, private sector organisations, the greenies, the NIMBY residents, etc. Perhaps unsurprisingly, they could not get agreement on the *criteria* for their multi-criteria assessment. Thus, measurement was difficult when sections of the group did not endorse that particular measure (this despite the alliance having a very clear aspirational goal and vision). They all decomposed it very differently from each other and were not prepared to entertain alternative viewpoints.
So although "experts" can be calibrated in terms of their range of answers to a given measurement question, they may still disagree considerably on the identification and importance of that measure. Perhaps a great appendix in a future book edition, or a download, would be examples of decomposition across various disciplines and scenarios. It would provide a reference point for others following later.
Incidentally, when training in the dialogue mapping craft, one of my co-students mentioned AIE in passing and, being a kindred spirit, we spoke at length about your methods to the class. He was learning dialogue mapping for the sort of reason I mention above: to help people gain enough shared understanding of the problem space that they can start to have the sort of valuable dialogue that leads from outcomes to measurements.
regards
Paul
It's good to hear about buzz regarding my work! My book sales have been very good and steady for two years, so I was sure it was getting some good buzz, but it's nice to get examples confirming it.
You mention that experts may disagree considerably on the identification and importance of a measure. Actually, the one thing we don't really have to count on the experts for is the importance of the measure. We compute that with the information valuation methods I discuss in the book (which I compute with macros, but the approximations in the book are useful). Also, I find that experts disagree much less than you might expect if they are allowed to use ranges instead of points. People who would never agree on a point will easily agree on ranges.
I do offer some example decompositions throughout the book and a few more examples in Chapter 14. But even standardized models need to be audited where they are applied. And I want to avoid attempting exhaustive examples, because if this is a general solution (which I argue it is), then the answer to any problem should be clear after working out a few examples. In the last two years, I've had just two emails from people who were disappointed that they didn't see their own specific measurement problem discussed in detail in the book. One, I remember, was about measuring the value of diversity training. She apparently bought the book specifically to resolve that problem and, upon seeing that it wasn't specifically mentioned, assumed that nothing in the book applied to it.
I know you don't make this mistake, but let me take this opportunity to describe my response to queries like those two emails (I'm sure more people feel that way but didn't email me about it). I responded that this was like expecting that a basic book on arithmetic should have the answer to 325 x 11,452, or that a physics book would list the power output of a specific engine. Of course, this book is about an approach that can be applied to anything and, like an equation in physics, if it can't be applied to as-yet-unseen problems, then it's not a general solution. I encourage people to take whatever measurement problem they have and actually work through the steps in the book, regardless of whether I mention their problem by name in one of the examples.
Thanks,
Doug