Watch Doug’s Talk at Cybersecurity Risk Seminar Hosted by Defense Acquisition University and NavalX

On February 5, 2020, Doug Hubbard participated in a panel about cybersecurity risk hosted by Defense Acquisition University and NavalX. You can watch him help acquisition professionals see cybersecurity in a whole new way in the video clip below.


Learn how to build better defenses against cyber attack – and reduce cybersecurity risk – with our two-hour How to Measure Anything in Cybersecurity Risk webinar. $150 – limited seating.








7 Simple Principles for Measuring Anything


If you want to make better decisions, using data to inform your decision-making is the way to go. Organizations largely understand this reality in principle, if not always in practice. But it’s not enough to just gather data; you have to know how to use it, and unfortunately, most organizations don’t.

Fortunately, quantitative analysis doesn’t have to be that difficult. We’ve found over the decades that much of what interferes with data-driven decision-making is confusion, misunderstanding, and misconceptions about measurement in general. As our work with leading public and private entities across the globe has proven, these obstacles can be overcome. Anything can be measured, and if you can measure it, you can improve your decisions.

Below are seven simple principles for making better measurements that form the basis of our methodology, Applied Information Economics (AIE), and – if applied – can quickly improve the quality of the decisions you make (Figure 1).

Figure 1: 7 Simple Rules for Measuring Anything – Hubbard Decision Research



Rule 1: If It Matters, It Can Be Measured

Nothing is impossible to measure. We’ve measured concepts that people thought were immeasurable, like customer/employee satisfaction, brand value and customer experience, reputation risk from a data breach, the chances and impact of a famine, and even how a director or actor impacts the box office performance of a movie. If you think something is immeasurable, it’s because you’re thinking about it the wrong way.

Put simply:

  1. If it matters, it can be observed or detected.
  2. If it can be detected, then we can detect it as an amount or in a range of possible amounts.
  3. If it can be detected as a range of possible amounts, we can measure it.

If you can measure it, you can then tell yourself something you didn’t know before. Case in point: the Office of Naval Research and the United States Marine Corps wanted to forecast how much fuel would be needed in the field. There was considerable uncertainty in those forecasts, though, especially since fuel stores had to be planned 60 days in advance. The method they used to create these forecasts was based on previous experience, educated assumptions, and the reality that if they were short on fuel, lives could be lost. So, they had the habit of sending far more fuel to the front than they needed, which meant more fuel depots, more convoys, and more Marines in harm’s way.

After we conducted the first round of measurements, we found something that surprised the Marines: the biggest single factor in forecasting fuel use was whether the roads the convoys took were paved or gravel (Figure 2).

Figure 2: Average Changes in Fuel

We also found that the Marines were measuring variables that provided a lot less value. More on that later.

Rule 2: You Have More Data Than You Think

Many people fall victim to the belief that to make better decisions, you need more data – and if you don’t have enough data, then you can’t and shouldn’t measure something.

But you actually have more data than you think. Whatever you’re measuring has probably been measured before. And, you have historical data that can be useful, even if you think it’s not enough.

It’s all about asking the right questions, questions like:

  • What can email traffic tell us about teamwork?
  • What can website behavior tell us about engagement?
  • What can publicly available data about other companies tell us about our own?


Rule 3: You Need Less Data Than You Think

Despite what the Big Data era may have led you to believe, you don’t need troves and troves of data to reduce uncertainty and improve your decisions. Even small amounts of data can be useful. Remember, if you know almost nothing, almost anything will tell you something.
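One concrete illustration of this rule is the Rule of Five from How to Measure Anything: take a random sample of just five items, and there is a 93.75% chance (1 − 2 × 0.5⁵) that the population median lies between the smallest and largest values in the sample. A quick simulation bears this out; the skewed population below is entirely made up for illustration.

```python
import random

# Rule of Five: with 5 random samples, the chance that the population median
# falls between the sample min and max is 1 - 2 * (0.5 ** 5) = 93.75%.
# The skewed population below is hypothetical, purely for illustration.
random.seed(42)
population = [random.lognormvariate(3, 1) for _ in range(100_000)]
true_median = sorted(population)[len(population) // 2]

trials = 10_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"Median captured in {hits / trials:.1%} of trials (theory: 93.75%)")
```

Note that this works regardless of the shape of the distribution; the 93.75% figure only depends on each sample having a 50/50 chance of landing above or below the median.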

Rule 4: We Measure to Reduce Uncertainty

One major obstacle to better quantitative analysis is a profound misconception of measurement. For that, we can blame science, or at least how science is portrayed to the public at large. To most people, measurement should result in an exact value, like the precise amount of liquid in a beaker, or a specific number of this, that, or the other.

In reality, though, measurements don’t have to be that precise to be useful. The key purpose of measurement is to reduce uncertainty. Even marginal reductions in uncertainty can be incredibly valuable.
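To see measurement-as-uncertainty-reduction in action, watch a confidence interval shrink as observations accumulate. The readings below are invented for illustration; the point is that each additional measurement narrows the 90% interval around the estimate without ever producing an "exact" answer.

```python
import math
import random
import statistics

# Illustration with hypothetical numbers: measurement as uncertainty reduction.
# We estimate a process's mean from samples; the 90% interval around the
# estimate narrows as data accumulates, long before it becomes "exact".
random.seed(7)
data = [random.gauss(100, 15) for _ in range(40)]  # hypothetical readings

half_widths = {}
for n in (5, 10, 20, 40):
    sample = data[:n]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    half_widths[n] = 1.645 * se  # ~90% interval, normal approximation
    print(f"n={n:2d}: mean ~ {mean:6.1f}, 90% CI +/- {half_widths[n]:.1f}")
```

Even the jump from 5 to 10 observations typically cuts the interval substantially, which is why early measurements tend to be the most valuable ones.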

Rule 5: What You Measure Most May Matter Least

What if you already make measurements? Let’s say you collect data and have some system – even an empirical, statistics-based quantitative method – to make measurements. Your effectiveness may be severely hampered by measuring things that, at the end of the day, don’t really matter.

By “don’t really matter,” we mean that the value of these measurements – the insight they give you – is low because the variable isn’t very important or isn’t worth the time and effort it took to measure it.

What we’ve found is a bit unsettling to data scientists: some of the things organizations currently measure are largely irrelevant or outright useless – and can even be misleading. This principle represents a phenomenon called “measurement inversion” (Figure 3):

Figure 3: Measurement Inversion – Hubbard Decision Research

Unfortunately, it’s a very common phenomenon. The Marine Corps fell victim to it with their fuel forecasts. They thought one of the main predictors of fuel use for combat vehicles like tanks and Light Armored Vehicles (LAV) was the chance that they would make contact with the enemy. This makes sense; tanks burn more fuel in combat because they move around a lot more, and their gas mileage is terrible. In reality, though, that variable didn’t move the needle that much.

In fact, the more valuable predictive factor was whether or not the combat vehicle had been in a specific area before. It turns out that vehicle commanders, when maneuvering in an unfamiliar area (i.e., one where they had never encountered the landmarks, routes, and conditions before), tend to keep their engines running for a variety of reasons. That burns fuel.

(We also discovered that combat-vehicle fuel consumption mattered less than that of convoy vehicles, like fuel trucks, because there were fewer than 60 tanks in the unit we analyzed. There were over 2,300 non-combat vehicles.)

Rule 6: What You Measure Least May Matter Most

Much of the value we’ve generated for our clients has come through measuring variables that the client thought were either irrelevant or too difficult to measure. But often, these variables provide far more value than anyone thought!

Fortunately, you can learn how to measure what matters most. The chart below shows some of our experiences combating this phenomenon with our clients (Figure 4):

Figure 4: Real Examples of Measurement Inversion – Hubbard Decision Research

This isn’t to say that the variables you’re measuring now are “bad.” What we’re saying is that uncertainty about how “good” or “bad” a variable is (i.e., how much value it adds to the predictive power of the model) is one of the biggest sources of error in a model. In other words, if you don’t know how valuable a variable is, you may be making a measurement you shouldn’t – or missing a measurement you should.

Rule 7: You Don’t Have to Beat the Bear

When making measurements, the best thing to remember is this: If you and your friends are camping, and suddenly a bear attacks and starts chasing you, you don’t have to outrun the bear – you only have to outrun the slowest person.

In other words, the quantitative method you use to make measurements and decisions only has to beat the alternative. Any empirical method you incorporate into your process can improve it if it provides more practical and accurate insight than what you were doing before.

The bottom line is simple: Measurement is a process, not an outcome. It doesn’t have to be perfect, just better than what you use now. Perfection, after all, is the perfect enemy of progress.

Incorporate these seven principles into your decision-making process and you’re already on your way to better outcomes.


Learn how to start measuring variables the right way – and create better outcomes – with our two-hour Introduction to Applied Information Economics: The Need for Better Measurements webinar. $100 – limited seating.





Trojan Horse: How a Phenomenon Called Measurement Inversion Can Massively Cost Your Company



  • A Trojan horse is anything that introduces risk to an organization through something that appears to be positive
  • Measuring the wrong variables is a Trojan horse that infiltrates virtually every organization
  • This phenomenon has a real cost that can be measured – and avoided

The Trojans stood at the walls, drunk from victory celebrations, having watched the Greek fleet set sail in retreat after nearly 10 years of constant warfare. They had little reason to suspect treachery when they saw the massive wooden horse just outside their gates, apparently a gift offering from the defeated Greeks. Because of their confidence – or overconfidence – they opened the gates and claimed the wooden horse as the spoils of war.

Later that night, as the Trojans lay in a drunken stupor throughout the city, a force of Greek soldiers hidden in the horse emerged and opened the gates to the Greek army, which had not retreated but had lain in wait just beyond sight of the city. Swords drawn and spears hefted, the Greek soldiers spread throughout the city and descended upon its people.

The end result is something any reader of The Iliad knows well: the inhabitants of Troy were slaughtered or sold into slavery, the city was razed to the ground, and the term “Trojan horse” became notorious for something deceitful and dangerous disguised as something innocuous and good.

Organizations are wising up to the fact that quantitative analysis is a vital part of making better decisions. Quantitative analysis can even seem like a gift, and used properly, it can be. However, the act of measuring and analyzing something can, in and of itself, introduce error – something Doug Hubbard calls the analysis placebo. Put another way, merely quantifying a concept and subjecting the data to an analytical process doesn’t mean you’re going to get better insights.

It’s not just what data you use, although that’s important. It’s not even how you make the measurements, which is also important. The easiest way to introduce error into your process is to measure the wrong things – and if you do, you’re bringing a Trojan horse into your decision-making.

Put another way, the problem is an insidious one: what you’re measuring may not matter at all, and may just be luring you into a false sense of security based on erroneous conclusions.

The One Phenomenon Every Quantitative Analyst Should Fear

Over the past 20 years and across more than 100 measurement projects, we’ve found a peculiar and pervasive phenomenon: what organizations measure the most often matters the least – and what they aren’t measuring tends to matter the most. This phenomenon is what we call measurement inversion, and it’s best demonstrated by the following image from a typical large software development project (Figure 1):


Figure 1: Measurement Inversion

Some examples of measurement inversion we’ve discovered are shown below (Figure 2):


Figure 2: Real Examples of Measurement Inversion

There are many reasons for measurement inversion, ranging from the innate inconsistency and overconfidence of subjective human assessment to organizational inertia – we measure what we’ve always measured, or what “best practices” say we should measure. Regardless of the reason, every decision-maker should know one vital reality: measurement inversion can be incredibly costly.

Calculating the Cost of Measurement Inversion for Your Company

The Trojan horse cost Troy everything. That probably won’t be the case for your organization, as far as one measurement goes. But there is a cost to introducing error into your analysis process, and that cost can be calculated like anything else.

We uncover the value of each piece of information with a process appropriately named Value of Information Analysis (VIA). VIA is based on the simple yet profound premise that each thing we decide to measure comes with a cost and an expected value, just like the decisions these measurements are intended to inform. Put another way, as Doug says in How to Measure Anything, “Knowing the value of the measurement affects how we might measure something or even whether we need to measure it at all.” VIA is designed to determine this value, with the theory that choosing higher-value measurements should lead to higher-value decisions.

Over time, Doug has uncovered some surprising revelations using this method:

  • Most of the variables used in a typical model have an information value of zero
  • The variables with the highest information value are usually never measured
  • The most-measured variables have little to no value

The lower the information value of your variables, the less value you’ll generate from your model. But how does this translate into costs?

A model can calculate what we call your overall Expected Opportunity Loss (EOL): the probability-weighted average loss you can expect from your current decision if you measure nothing further. We want to get the EOL as close to zero as possible. Each decision we make can either grow the EOL or shrink it, and each variable we measure can influence those decisions. Ergo, what we measure impacts our expected loss, for better or for worse.
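As a toy sketch of the idea (the decision and all numbers here are invented, not from any case study): suppose you must decide whether to invest in a project whose payoff has a 90% CI of −$2M to $6M. The opportunity loss in any scenario is the gap between the best choice for that scenario and the choice you actually committed to; the EOL is the probability-weighted average of that gap.

```python
import random

# Hypothetical Expected Opportunity Loss (EOL) sketch; all numbers invented.
# Decision: invest in a project or not. Uncertain payoff, 90% CI -$2M..$6M,
# modeled as a normal distribution (mean $2M, sigma chosen to match the CI).
random.seed(1)
mu, sigma = 2.0, (6.0 - 2.0) / 1.645  # in $M; a 90% CI spans +/-1.645 sigma

def eol(invest: bool, trials: int = 100_000) -> float:
    """Average gap between the best per-scenario choice and our fixed choice."""
    total = 0.0
    for _ in range(trials):
        payoff = random.gauss(mu, sigma)
        best = max(payoff, 0.0)            # choice made with perfect information
        chosen = payoff if invest else 0.0  # our fixed decision's outcome
        total += best - chosen
    return total / trials

eol_invest, eol_pass = eol(True), eol(False)
print(f"EOL if we invest:       ${eol_invest:.2f}M")
print(f"EOL if we don't invest: ${eol_pass:.2f}M")
```

Here investing is the better default (its EOL is much smaller), and a further measurement is only worth funding if it can shrink that remaining EOL by more than the measurement costs – which is exactly the comparison VIA formalizes.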

If the variables you’re measuring have a low information value – or an information value of zero – you’ll waste resources measuring them and do little to nothing to reduce your EOL. The cost of error, then, is the difference between your EOL with these low-value variables and the EOL with more-valuable variables.

Case in point: Doug performed a VIA for an organization called CGIAR. You can read the full case study in How to Measure Anything, but the gist of the experience is this: by measuring the right variables, the model was able to reduce the EOL for a specific decision – in this case, a water management system – from $24 million to under $4 million. That’s a reduction of 85%.

Put another way, if they had measured the wrong variables, then they would’ve incurred a possible cost of $20 million, or 85% of the value of the decision.

The bottom line is simple. Measurement inversion comes with a real cost for your business, one that can be calculated. This raises important questions that every decision-maker needs to answer for every decision:

  1. Are we measuring the right things?
  2. How do we know if we are?
  3. What is the cost if we aren’t?

If you can answer these questions, and get on the right path toward better quantitative analysis, you can be more like the victorious Greeks – and less like the citizens of a city that no longer exists, all because what they thought was a gift was the terrible vehicle of their destruction. 




These Two Problems Are Why Innovation Initiatives Fail



  • The most common reason organizations fail at innovating might not be what you think
  • Defining what innovation means to an organization is critical for success
  • Even the best-defined innovation initiatives fail if they aren’t properly measured

Innovation has captured the imagination of business leaders everywhere. Everyone wants to create that ground-breaking, disruptive product or service that turns the industry on its head. Or, at the very least, they want to move toward changing anything – paradigms, processes, markets, you name it – that will take the organization to that next level.

Unfortunately, many innovation initiatives fail – where failure means an outcome disproportionately smaller than the expectations, the allocated resources, or both. Such failures are common, and business literature is rife with reasons why, ranging from not having “good” ideas to not having the right people, enough budget, or an accepting culture.

The main reason an innovation initiative fails, however, is more fundamental: companies aren’t doing a good enough job defining what innovation actually means and then measuring it.

The Innovation Definition Problem

Decision-makers who at this very moment are spending a staggeringly high percentage of their revenue trying to innovate often don’t have a firm definition of what innovation means – and this isn’t just academic.

Innovation is a vague, mostly meaningless term that obscures what you’re really trying to accomplish. It can mean almost anything. Everyone has a definition; here are 15 of them.

What we’ve found when faced with these terms (and there are a lot of them) is that either the decision-makers know what they want to accomplish, but they don’t know how to measure it, or they think they know what they want to accomplish, but they’re measuring the wrong things (or even pursuing the wrong goal).

So, how do you define the problem? When organizations want to innovate, they’re largely looking to do something that they couldn’t previously do, for the purpose of taking the company to a level it couldn’t previously reach.

How they achieve this is largely a function of two things: impact and timeliness. The earlier a company undertakes an initiative, and the more impact that initiative has, the more innovative the company will be, as shown by the Innovation Time/Impact Square in Figure 1:


Figure 1 – Innovation Time/Impact Square

In our experience, the companies that find the greatest success compared to their peers come up with better ideas – impact – and do so usually before anyone else – timeliness. If your products or services are producing a lot of return, but you are behind the curve, you’re mainly just catching up by implementing best practices. If you’re getting there first, but your concepts aren’t particularly high-value, then you’re underachieving given your abilities. And if your concepts are low value and based only on what others have done before you, you’re falling behind.

Any of the three states of being may be acceptable, but none are examples of innovation. 

What does “impact” mean? One way to define it is to select an objective – revenue growth, higher stock price, more sales, greater market share, or some other desired outcome – and determine the growth target. 

Of course, no organization is going to spend a significant portion of their budget on merely adding a percent or two to their growth, not if they can help it. The allure of innovation is substantial, paradigm-changing growth.

What that growth looks like specifically depends on the firm, but the reality is simple: spending significant resources on innovation – a difficult and costly process – needs to be worth it.

“Timeliness” can mean a variety of things as well. Increasing the quantity of product concepts produced over a given period of time is one definition. Identifying new trends before anyone else is another. Focusing on speeding up the pace at which you create does have value in and of itself, but investing too much in accomplishing this goal can result in lower overall return.

Framing innovation in this way gives you the basis to make better decisions on anything from how much to increase the R&D budget to who you need to hire, what technology you need to acquire, or what you need to do to improve the quality of the ideas your organization creates.

Once you’ve defined what innovation means to your organization, you then have to measure it.

The Innovation Measurement Problem

The innovation measurement problem is simple: companies, by and large, don’t know how to measure this concept. In practice, this means most firms can’t:

  1. Evaluate how “good” they are at innovation, whatever that means
  2. Figure out what it takes to get “better” at innovation, whatever that looks like
  3. Determine the cost of doing those things to get “better” and forecast the ROI


The first major attempt to accomplish the first task came in 1976, when Michael Kirton published a paper identifying two types of creators: adaptors (those who make existing processes better) and innovators (those who do something different). From this effort came the Kirton Adaption-Innovation (KAI) Inventory, which identifies where a person falls on the adaption-innovation continuum.

The effort is a noble one, but it gives us no sense of objective value. Are people at the innovator end of the scale better at growing a company than those at the other end, and if so, by how much?

These kinds of problems aren’t specific to just the KAI inventory; they’re found in almost every attempt to quantify the processes, impacts, and probability of innovation initiatives. 

For example, some organizations also use what academics call semi-quantitative measures (we call them pseudo-quantitative) like the “Innovation Balanced Scorecard,” promoted in 2015, and the “Innovation Audit Scorecard,” promoted in 2005. The flaws of these particular methods are explained in How to Measure Anything; they include the following:

  • Ranges of values on scorecards are largely arbitrary;
  • Weighting scores is also arbitrary (i.e., how do you know a component weighted at 15% is twice as important as one weighted 7.5%? Are those individual values accurate?);
  • Estimates are usually subjective and uncalibrated, even from experts;
  • It’s impossible to perform meaningful mathematical operations with ordinal scales (i.e., is something rated a 4 really twice as effective as something rated a 2?);
  • They don’t incorporate probabilities of outcomes; and
  • Using one gives you the illusion of improved decision-making, even though it may actually introduce error (a concept called the analysis placebo)

McKinsey, to its credit, promotes two quantitative metrics for evaluating the effectiveness of R&D expenditures (the ratio of R&D spending to new-product sales and the product-to-margin conversion rate), but even this approach doesn’t indicate whether the innovation problem lies within R&D – or whether focusing on improving these two metrics is the best action to take.

Plus, R&D is only one way a company can innovate, per how we defined the concept above, and it doesn’t exist in a vacuum; it is accompanied by a host of other factors and strategies.

There’s a bigger problem, though, with measuring innovation: even if you come up with “good” metrics, no one tells you which of them have the most predictive power for your organization. In other words, each variable, each measurement, each bit of information you gather has a certain value. The vast majority of the time, organizations have no idea what the value of these pieces of information is – which leads them to measure what is easy, simple, and intuitive rather than what should be measured.

For example, one common metric that is bandied about is the number of patents created in a certain period of time (e.g., a quarter or a year). Refer back to the Innovation Time/Impact Square above. More patents increase the chance that you’ll get there first, right? Maybe – but that may not make you more innovative. What if you modeled your creative process, estimated the market potential of each idea before you developed and patented it, and found that your ideas, as it turns out, have consistently low market potential? Then your problem isn’t “How do we create more ideas?”; it’s “How do we create better ideas?”

It doesn’t matter if you know you need to create ideas that are more timely and more impactful if you can’t measure either. You won’t be able to make the best decisions, which will keep your organization out of the rarefied air of the innovators.

The bottom line: focusing on defining innovation for your firm and creating better measurements based on those definitions is the only proven way to improve innovation. 


Learn how to start measuring innovation the right way – and create better outcomes – with our two-hour How to Measure Anything in Innovation webinar. $150 – limited seating.


What the Manhattan Project and James Bond Have in Common – and Why Every Analyst Needs to Know It



  • A powerful quantitative analysis method was created as a result of the Manhattan Project and named for an exotic casino popularized by the James Bond series
  • The tool is the most practical and efficient way of simulating thousands of scenarios and calculating the most likely outcomes
  • Unlike other methods, this tool incorporates randomness that is found in real-world decisions
  • Using this method doesn’t require sophisticated software or advanced training; any organization can learn how to use it

A nuclear physicist, a dashing British spy, and a quantitative analyst walk into a casino. This sounds like the opening of a bad joke, except what all of these people have in common can be used to create better decisions in any field by leveraging the power of probability.

The link in question – that common thread – gets its name from an exotic locale on the Mediterranean, or, specifically, a casino. James Bond visited a venue inspired by it in Casino Royale, a book written by Ian Fleming, who – before he was a best-selling author – served in the British Naval Intelligence Division in World War II. While Fleming was crafting creative plans to steal intel from Nazi Germany, a group of nuclear physicists on the other side of the Atlantic were crafting plans of their own: to unleash the awesome destructive power of nuclear fission and create a war-ending bomb.

Trying to predict the most likely outcome of a theoretical nuclear fission reaction was difficult, to say the least, particularly using analog computers. To oversimplify the challenge: scientists had to be able to calculate whether or not the bomb they were building would explode – a calculation that required an integral equation to somehow predict the behavior of atoms in a chain reaction. Mathematicians Stanislaw Ulam and John von Neumann, both members of the Manhattan Project, created a way to calculate and model the sum of thousands of variables (achieved by literally placing a small army of smart women in a room and having them run countless calculations). When they wanted to put a name to this method, Ulam recommended the name of the casino where his uncle routinely gambled away large sums of money<fn>Metropolis, N. (1987). The beginning of the Monte Carlo method. Los Alamos Science, 125–130.</fn>.

That casino – the one Fleming’s James Bond would popularize and the one where Ulam’s uncle’s gambling addiction took hold – was in Monte Carlo, and thus the Monte Carlo simulation was born.

Now, the Monte Carlo simulation is one of the most powerful tools a quantitative analyst can use when incorporating the power of probabilistic thinking into decision models.

How a Monte Carlo Simulation Works – and Why We Need It To

In making decisions – from how to make a fission bomb to figuring out a wager in a table game in a casino – uncertainty abounds. Uncertainty abounds because, put simply, a lot of different things can happen. There can be almost-countless scenarios for each decision, and the more variables and measurements are involved, the more complicated the calculations become to try and figure out what’s most likely to happen.

If you can reduce possible outcomes to a range of probabilities, you can make better decisions in theory. The problem is, doing so is very difficult without the right tools. The Monte Carlo simulation was designed to address that problem and provide a way to calculate the probability of thousands of potential outcomes through sheer brute force.

Doug Hubbard provides a scenario in The Failure of Risk Management that explains how a Monte Carlo simulation works and can be applied to a business case (in this context, figuring out the ROI of a new piece of equipment). Assume that you’re a manager considering the potential value of a new widget-making machine. You perform a basic cost-benefit analysis and estimate that the new machine will make one million widgets, delivering $2 of profit per unit. The machine can make up to 1.25 million, but you’re being conservative and think it’ll operate at 80% capacity on average. We don’t know the exact amount of demand. We could be off by as much as 750,000 widgets per year, above or below.

We can conceptualize the uncertainty we have like so:

  • Demand: 250,000 to 1.75 million widgets per year
  • Profit per widget: $1.50 to $2.50

We’ll say these numbers fall into a 90% confidence interval with a normal distribution. There are a lot of possible outcomes, to put it mildly (and this is a pretty simple business case). Which are the most likely? In the book, Doug used an MC simulation to run 10,000 scenarios and tallied the results for each (with each scenario representing some combination of demand and profit per widget, producing a loss or gain). The results are described by two figures: a histogram of outcomes (Figure 1) and a cumulative probability chart (Figure 2)<fn>Hubbard, D. W. (2009). The Failure of Risk Management: Why It’s Broken and How to Fix It. Hoboken, NJ: John Wiley & Sons.</fn>:

Figure 1: Histogram of Outcomes

Figure 2: Cumulative Probability Chart

You, the manager, would ideally then calculate your risk tolerance and use this data to create a loss exceedance curve, but that’s another story for another day. As Doug explains, using the MC simulation allowed you to gain critical insight that otherwise would’ve been difficult to impossible to obtain:

Without this simulation, it would have been very difficult for anyone other than mathematical savants to assess the risk in probabilistic terms. Imagine how difficult it would be in a more realistically complex situation.
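The widget scenario above can be sketched in a few lines of code. The two 90% intervals come straight from the text; the machine's annual cost is not given there, so the figure below is a hypothetical placeholder needed to turn revenue into a gain or loss.

```python
import random
import statistics

# Monte Carlo sketch of the widget-machine case. The demand and profit CIs are
# from the text; annual_cost is a HYPOTHETICAL placeholder (not from the book).
random.seed(2024)
Z90 = 1.645  # a 90% CI spans +/-1.645 standard deviations of a normal

demand_mu, demand_sd = 1_000_000, (1_750_000 - 1_000_000) / Z90
profit_mu, profit_sd = 2.00, (2.50 - 2.00) / Z90
annual_cost = 1_500_000  # hypothetical, for illustration only

outcomes = []
for _ in range(10_000):  # 10,000 scenarios, as in the book
    demand = random.gauss(demand_mu, demand_sd)            # widgets per year
    profit_per_widget = random.gauss(profit_mu, profit_sd)  # $ per widget
    outcomes.append(demand * profit_per_widget - annual_cost)

p_loss = sum(o < 0 for o in outcomes) / len(outcomes)
print(f"Mean outcome: ${statistics.mean(outcomes):,.0f}")
print(f"Chance of a loss: {p_loss:.1%}")
```

Tallying `outcomes` into bins reproduces a histogram like Figure 1; sorting them and computing running proportions gives a cumulative probability chart like Figure 2. (A fuller model would also truncate the occasional negative demand draw.)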

The diverse benefits of incorporating MC simulations into decision models were best summed up by a group of researchers in an article titled “Why the Monte Carlo method is so important today”:<fn>Kroese, D. P., Brereton, T., Taimre, T., & Botev, Z. I. (2014). Why the Monte Carlo method is so important today. Wiley Interdisciplinary Reviews: Computational Statistics, 6(6), 386–392.</fn>

  • Easy and Efficient. Monte Carlo algorithms tend to be simple, flexible, and scalable.
  • Randomness as a Strength. The inherent randomness of the MCM is not only essential for the simulation of real-life random systems, it is also of great benefit for deterministic numerical computation.
  • Insight into Randomness. The MCM has great didactic value as a vehicle for exploring and understanding the behavior of random systems and data. Indeed we feel that an essential ingredient for properly understanding probability and statistics is to actually carry out random experiments on a computer and observe the outcomes of these experiments — that is, to use Monte Carlo simulation.
  • Theoretical Justification. There is a vast (and rapidly growing) body of mathematical and statistical knowledge underpinning Monte Carlo techniques, allowing, for example, precise statements on the accuracy of a given Monte Carlo estimator (for example, square-root convergence) or the efficiency of Monte Carlo algorithms.

In short, Monte Carlo simulations are easy to use, help you both replicate real-life randomness and understand randomness itself, and are backed by research and evidence showing how they make decision models more accurate. We need them because any significant real-world decision comes with a staggering amount of uncertainty, complicated by thousands of potential outcomes created by myriad combinations of variables and distributions – all with an eminently frustrating amount of randomness mixed throughout.

How to Best Use a Monte Carlo Simulation

Of course, knowing that an MC simulation tool is important – even necessary – is one thing. Putting it into practice is another.

The bad news is that merely using the tool doesn’t insulate you from a veritable rogue’s gallery of factors that lead to bad decisions, ranging from overconfidence and uncalibrated subjective estimates to logical fallacies, soft-scoring methods, risk matrices, and other pseudo-quantitative methods that perform no better than chance – and frequently worse.

The good news is that all of these barriers to better decisions can be overcome. More good news: you don’t need sophisticated software – or even specialized training – to run a Monte Carlo simulation. Many of the clients we train in our quantitative methodology have neither. You can build a functional MC simulation in native Microsoft Excel: a formula like =NORM.INV(RAND(), mean, standard_dev) generates one normally distributed scenario, and copying it down a column generates thousands. Even a basic version can help by giving you more insight than you have now – another proven way to glean actionable knowledge from your data.

On its own, though, an MC simulation isn’t enough. The best use of the Monte Carlo method is to incorporate it into a decision model. The best decision models employ proven quantitative methods – including but not limited to Monte Carlo simulations – in the process below (figure 3):


Figure 3: HDR Decision Analysis Process

The outputs of a Monte Carlo simulation typically feed into that last step, where the model’s outputs are used to “determine optimal choice” – that is, to identify the best course of action. And again, you don’t need specialized software to produce a working decision model; Microsoft Excel is all you need.
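To sketch what “determine optimal choice” can look like, the simulation’s outputs can be reduced to a single decision-relevant quantity, such as the probability that the investment loses money. The machine cost below is a hypothetical figure added purely for illustration (the book’s example uses its own numbers), and the 90% CIs are the ones from the widget scenario above:

```python
import random

random.seed(0)

MACHINE_COST = 1_000_000  # hypothetical purchase price, for illustration only

# Sample the two uncertain inputs from normal distributions derived from
# their 90% CIs (mean = CI midpoint, sd = CI width / 3.29).
def sample_net_profit():
    demand = random.gauss(1_000_000, 1_500_000 / 3.29)   # 250K to 1.75M widgets
    margin = random.gauss(2.00, 1.00 / 3.29)             # $1.50 to $2.50/widget
    return demand * margin - MACHINE_COST

outcomes = [sample_net_profit() for _ in range(10_000)]
p_loss = sum(o < 0 for o in outcomes) / len(outcomes)
print(f"Probability the machine loses money: {p_loss:.1%}")
```

A threshold like this is exactly what a loss exceedance curve generalizes: instead of one probability at one loss level, the curve reports the chance of exceeding every possible loss level.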

You may not be creating a fearsome weapon, or out-scheming villains at the baccarat table, but your decisions are important enough to make using the best scientific methods available. Incorporate simulations into your model and you’ll make better decisions than you did before – decisions a nuclear physicist or a secret agent would admire.

Recommended Products:

Learn how to create Monte Carlo simulations in Excel with our Basic Simulations in Excel webinar. Register today to get 15% off with coupon code EXCEL15.
