How to Measure Anything Book Sales Have Exceeded the 100,000 Mark


Over 12 years ago, on August 3, 2007, the first edition of How to Measure Anything was published. Since then, Doug Hubbard has written two more editions, and the series has sold over 100,000 copies – purchased by customers ranging from university professors, scientists, and government officials to Fortune 500 executives, Silicon Valley visionaries, and global leaders in virtually every industry under the sun.

The premise of How to Measure Anything remains true all these years later: anything can be measured, and if you can measure it – and it matters – then you can make better decisions. The measurement challenges we’ve overcome over the past 12 years include:

  • the most likely rate of infection for COVID-19
  • drought resilience in the Horn of Africa
  • the value of “innovation” and “organizational development”
  • the risk of a mine flooding in Canada
  • the value of roads and schools in Haiti
  • the risk and return of developing drugs, medical devices and artificial organs
  • the value and risks of new businesses
  • the value of restoring the Kubuqi Desert in Inner Mongolia
  • the value and risks of upgrading a major electrical grid
  • near and long-term reputation damage from data breaches

In addition to learning critical measurement concepts by reading How to Measure Anything, thousands of customers from all over the world have obtained proven, industry-leading quantitative training via our series of online webinars and in-person seminars. This includes the ever-growing line of How to Measure Anything-inspired webinars in key areas like:

  • Cybersecurity
  • Project Management
  • Innovation
  • Public Policy and Social Impact
  • Risk Management

More are on the way. (Do you have an area you’d like us to cover in our training? Tell us here.)

Through this series, they’ve become more calibrated and have learned how to calibrate others in their organizations. They’ve learned how to build their own Monte Carlo simulations in Excel from scratch – no special software needed. They’ve learned how to move away from pseudo-quantitative methods like risk matrices and heat maps in cybersecurity. And they’ve learned how to figure out what’s worth measuring by performing value of information calculations. These are just a few examples of the practical skills and takeaways our customers have received since How to Measure Anything was first published.
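Building a Monte Carlo simulation from scratch really does take only a few lines. Below is a minimal sketch in Python rather than Excel; the cost model, its three uncertain inputs, and all of the ranges are hypothetical, chosen only to show the mechanics of sampling uncertain inputs and summarizing the resulting distribution:

```python
import random

def simulate_project_cost(trials=10_000, seed=42):
    """Monte Carlo sketch: total cost = labor + materials + overrun.
    Each input is an uncertain range, sampled once per trial."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        labor = rng.normalvariate(500_000, 100_000)    # 90% CI roughly 335k-665k
        materials = rng.uniform(200_000, 400_000)      # flat range: we only know bounds
        overrun = rng.lognormvariate(10, 0.5)          # skewed right, like real overruns
        totals.append(labor + materials + overrun)
    totals.sort()
    return {
        "mean": sum(totals) / trials,
        "p05": totals[int(0.05 * trials)],
        "p95": totals[int(0.95 * trials)],
    }

result = simulate_project_cost()
print(f"Mean cost: {result['mean']:,.0f}")
print(f"90% interval: {result['p05']:,.0f} to {result['p95']:,.0f}")
```

The output is a distribution, not a point estimate: the 5th–95th percentile band is the simulated 90% interval for total cost, which is exactly the kind of range a decision model needs.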

We’re also planning on taking quantitative training to the next level in an exciting and ground-breaking development that will be announced later this year.

The era of measurement – the pursuit of modern science – began in 1687 with Newton’s Principia. Finding better ways to create better measurements is the logical next step in the evolution we’ve seen over the three-plus centuries since, and How to Measure Anything will continue to do its part as the world continues further down an uncertain path. Thanks for being a part of the journey. We look forward to what’s to come and hope you continue on with us.


Learn how to start measuring variables the right way – and create better outcomes – with our two-hour Introduction to Applied Information Economics: The Need for Better Measurements webinar. $100 – limited seating.






7 Simple Principles for Measuring Anything


If you want to make better decisions, using data to inform your decision-making is the way to go. Organizations largely understand this reality in principle, if not always in practice. But it’s not enough to just gather data; you have to know how to use it, and unfortunately, most organizations don’t.

Fortunately, quantitative analysis doesn’t have to be that difficult. We’ve found over the decades that much of what interferes with data-driven decision-making is confusion, misunderstanding, and misconceptions about measurement in general. As our work with leading public and private entities across the globe has proven, these obstacles can be overcome. Anything can be measured, and if you can measure it, you can improve your decisions.

Below are seven simple principles for making better measurements that form the basis of our methodology, Applied Information Economics (AIE), and – if applied – can quickly improve the quality of the decisions you make (Figure 1).

Figure 1: 7 Simple Rules for Measuring Anything – Hubbard Decision Research



Rule 1: If It Matters, It Can Be Measured

Nothing is impossible to measure. We’ve measured concepts that people thought were immeasurable, like customer/employee satisfaction, brand value and customer experience, reputation risk from a data breach, the chances and impact of a famine, and even how a director or actor impacts the box office performance of a movie. If you think something is immeasurable, it’s because you’re thinking about it the wrong way.

Put simply:

  1. If it matters, it can be observed or detected.
  2. If it can be detected, then we can detect it as an amount or in a range of possible amounts.
  3. If it can be detected as a range of possible amounts, we can measure it.

If you can measure it, you can then tell yourself something you didn’t know before. Case in point: the Office of Naval Research and the United States Marine Corps wanted to forecast how much fuel would be needed in the field. There was a lot of uncertainty about how much would be needed, though, especially considering fuel stores had to be planned 60 days in advance. The method they used to create these forecasts was based on previous experience, educated assumptions, and the reality that if they were short on fuel, lives could be lost. So, they had the habit of sending far more fuel to the front than they needed, which meant more fuel depots, more convoys, and more Marines in harm’s way.

After we conducted the first round of measurements, we found something that surprised the Marines: the biggest single factor in forecasting fuel use was whether the roads the convoys took were paved or gravel (Figure 2).

Figure 2: Average Changes in Fuel

We also found that the Marines were measuring variables that provided a lot less value. More on that later.

Rule 2: You Have More Data Than You Think

Many people fall victim to the belief that to make better decisions, you need more data – and if you don’t have enough data, then you can’t and shouldn’t measure something.

But you actually have more data than you think. Whatever you’re measuring has probably been measured before. And, you have historical data that can be useful, even if you think it’s not enough.

It’s all about asking the right questions, questions like:

  • What can email traffic tell us about teamwork?
  • What can website behavior tell us about engagement?
  • What can publicly available data about other companies tell us about our own?


Rule 3: You Need Less Data Than You Think

Despite what the Big Data era may have led you to believe, you don’t need troves and troves of data to reduce uncertainty and improve your decisions. Even small amounts of data can be useful. Remember: if you know almost nothing, almost anything will tell you something.
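Rule 3 is behind the book’s “Rule of Five”: with just five random samples from any population, there is a 93.75% chance the true median lies between the smallest and largest sample. The sketch below derives that probability and checks it by simulation against an arbitrary, made-up skewed population:

```python
import random

# Each random draw has a 50% chance of landing above the population median.
# The median escapes the sample's min-max range only if all five draws land
# on the same side, so:
p_all_same_side = 2 * (0.5 ** 5)          # "all above" plus "all below"
p_median_in_range = 1 - p_all_same_side
print(f"Analytic: {p_median_in_range:.4f}")   # 0.9375

# Quick simulation check against an arbitrary skewed population
rng = random.Random(0)
population = sorted(rng.lognormvariate(3, 1) for _ in range(100_001))
true_median = population[50_000]

trials, hits = 20_000, 0
for _ in range(trials):
    sample = [rng.choice(population) for _ in range(5)]
    if min(sample) <= true_median <= max(sample):
        hits += 1
print(f"Simulated: {hits / trials:.4f}")
```

Note that the 93.75% figure doesn’t depend on the population’s shape at all, which is what makes five samples so surprisingly informative.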

Rule 4: We Measure to Reduce Uncertainty

One major obstacle to better quantitative analysis is a profound misconception of measurement. For that, we can blame science, or at least how science is portrayed to the public at large. To most people, measurement should result in an exact value, like the precise amount of liquid in a beaker, or a specific number of this, that, or the other.

In reality, though, measurements don’t have to be that precise to be useful. The key purpose of measurement is to reduce uncertainty. Even marginal reductions in uncertainty can be incredibly valuable.
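One way to make “reducing uncertainty” concrete is a standard Bayesian update: start with a wide prior range, fold in a few noisy observations, and watch the 90% interval narrow. The numbers below are purely illustrative:

```python
import math

# A measurement doesn't need to be exact; it just needs to narrow the range.
# Normal prior + noisy normal observations, via the standard conjugate update.
mu0, sigma0 = 50.0, 20.0                       # prior belief: wide 90% CI
obs, sigma_noise = [62.0, 58.0, 65.0], 15.0    # three rough measurements

n = len(obs)
precision = 1 / sigma0**2 + n / sigma_noise**2
post_sigma = math.sqrt(1 / precision)
post_mu = (mu0 / sigma0**2 + sum(obs) / sigma_noise**2) / precision

z90 = 1.645  # z-value for a 90% interval
print(f"Prior 90% CI width:     {2 * z90 * sigma0:.1f}")
print(f"Posterior 90% CI width: {2 * z90 * post_sigma:.1f}")
```

Even with only three imprecise observations, the posterior interval is markedly narrower than the prior: uncertainty reduced, which is the whole point of the measurement.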

Rule 5: What You Measure Most May Matter Least

What if you already make measurements? Let’s say you collect data and have some system – even an empirical, statistics-based quantitative method – to make measurements. Your effectiveness may be severely hampered by measuring things that, at the end of the day, don’t really matter.

By “don’t really matter,” we mean that the value of these measurements – the insight they give you – is low because the variable isn’t very important or isn’t worth the time and effort it took to measure it.

What we’ve found is a bit unsettling to data scientists: some of the things organizations currently measure are largely irrelevant or outright useless – and can even be misleading. This principle represents a phenomenon called “measurement inversion” (Figure 3):

Figure 3: Measurement Inversion – Hubbard Decision Research

Unfortunately, it’s a very common phenomenon. The Marine Corps fell victim to it with their fuel forecasts. They thought one of the main predictors of fuel use for combat vehicles like tanks and Light Armored Vehicles (LAV) was the chance that they would make contact with the enemy. This makes sense; tanks burn more fuel in combat because they move around a lot more, and their gas mileage is terrible. In reality, though, that variable didn’t move the needle that much.

In fact, the more valuable predictive factor was whether or not the combat vehicle had been in a specific area before. It turns out that vehicle commanders, when maneuvering in an unfamiliar area (i.e., one whose landmarks, routes, and conditions they had never encountered before), tend to keep their engines running for a variety of reasons. That burns fuel.

(We also discovered that combat vehicle fuel consumption wasn’t as large a factor as that of convoy vehicles, like fuel trucks, because there were fewer than 60 tanks in the unit we analyzed – and over 2,300 non-combat vehicles.)

Rule 6: What You Measure Least May Matter Most

Much of the value we’ve generated for our clients has come through measuring variables that the client thought were either irrelevant or too difficult to measure. But often, these variables provide far more value than anyone thought!

Fortunately, you can learn how to measure what matters the most. The chart below demonstrates some of our experiences combating this phenomenon with our clients (Figure 4):

Figure 4: Real Examples of Measurement Inversion – Hubbard Decision Research

This isn’t to say that the variables you’re measuring now are “bad.” What we’re saying is that uncertainty about how “good” or “bad” a variable is (i.e., how much value it has for the predictive power of the model) is one of the biggest sources of error in a model. In other words, if you don’t know how valuable a variable is, you may be making a measurement you shouldn’t – or missing out on a measurement you should.

Rule 7: You Don’t Have to Beat the Bear

When making measurements, the best thing to remember is this: If you and your friends are camping, and suddenly a bear attacks and starts chasing you, you don’t have to outrun the bear – you only have to outrun the slowest person.

In other words, the quantitative method you use to make measurements and decisions only has to beat the alternative. Any empirical method you incorporate into your process can improve it if it provides more practical and accurate insight than what you were doing before.

The bottom line is simple: Measurement is a process, not an outcome. It doesn’t have to be perfect, just better than what you use now. Perfection, after all, is the perfect enemy of progress.

Incorporate these seven principles into your decision-making process and you’re already on your way to better outcomes.







Trojan Horse: How a Phenomenon Called Measurement Inversion Can Massively Cost Your Company



  • A Trojan horse is anything that introduces risk to an organization through something that appears to be positive
  • Measuring the wrong variables is a Trojan horse that infiltrates virtually every organization
  • This phenomenon has a real cost that can be measured – and avoided

The Trojans stood at the walls, drunk from victory celebrations, having watched the Greek fleet set sail in apparent retreat after nearly 10 years of constant warfare. They had little reason to suspect treachery when they saw the massive wooden horse just outside their gates, apparently a gift offering from the defeated Greeks. Because of their confidence – or overconfidence – they opened the gates and claimed the wooden horse as the spoils of war.

Later that night, as the Trojans lay in a drunken stupor throughout the city, a force of Greek soldiers hidden in the horse emerged and opened the gates to the Greek army, which had not retreated but had lain in wait just beyond sight of the city. Swords drawn and spears hefted, the Greek soldiers spread throughout the city and descended upon its people.

The end result is something any reader of The Iliad knows well: the inhabitants of Troy were slaughtered or sold into slavery, the city was razed to the ground, and the term “Trojan horse” became notorious for something deceitful and dangerous masquerading as something innocuous and good.

Organizations are wising up to the fact that quantitative analysis is a vital part of making better decisions. Quantitative analysis can even seem like a gift, and used properly, it can be. However, the act of measuring and analyzing something can, in and of itself, introduce error – something Doug Hubbard calls the analysis placebo. Put another way, merely quantifying a concept and subjecting the data to an analytical process doesn’t mean you’re going to get better insights.

It’s not just what data you use, although that’s important. It’s not even how you make the measurements, which is also important. The easiest way to introduce error into your process is to measure the wrong things – and if you do, you’re bringing a Trojan horse into your decision-making.

Put another way, the problem is an insidious one: what you’re measuring may not matter at all, and may just be luring you into a false sense of security based on erroneous conclusions.

The One Phenomenon Every Quantitative Analyst Should Fear

Over the past 20 years, across more than 100 measurement projects, we’ve found a peculiar and pervasive phenomenon: what organizations measure most often tends to matter least – and what they aren’t measuring tends to matter most. This phenomenon is what we call measurement inversion, and it’s best demonstrated by the following image of a typical large software development project (Figure 1):


Figure 1: Measurement Inversion

Some examples of measurement inversion we’ve discovered are shown below (Figure 2):


Figure 2: Real Examples of Measurement Inversion

There are many reasons for measurement inversion, ranging from the innate inconsistency and overconfidence of subjective human assessment to organizational inertia, where we measure what we’ve always measured or what “best practices” say we should measure. Regardless of the reason, every decision-maker should know one vital reality: measurement inversion can be incredibly costly.

Calculating the Cost of Measurement Inversion for Your Company

The Trojan horse cost Troy everything. That probably won’t be the case for your organization, as far as one measurement goes. But there is a cost to introducing error into your analysis process, and that cost can be calculated like anything else.

We uncover the value of each piece of information with a process appropriately named Value of Information Analysis (VIA). VIA is based on the simple yet profound premise that each thing we decide to measure comes with a cost and an expected value, just like the decisions these measurements are intended to inform. Put another way, as Doug says in How to Measure Anything, “Knowing the value of the measurement affects how we might measure something or even whether we need to measure it at all.” VIA is designed to determine this value, with the theory that choosing higher-value measurements should lead to higher-value decisions.

Over time, Doug has uncovered some surprising revelations using this method:

  • Most of the variables used in a typical model have an information value of zero
  • The variables with the highest information value were usually never measured
  • The most measured variables had low to no value

The lower the information value of your variables, the less value you’ll generate from your model. But how does this translate into costs?

A model can calculate what we call your overall Expected Opportunity Loss (EOL): the probability-weighted loss you can expect from your current decision if you measure nothing further. We want to get the EOL as close to zero as possible. Each decision we make can either grow the EOL or shrink it, and each variable we measure can influence those decisions. Ergo, what we measure impacts our expected loss, for better or for worse.

If the variables you’re measuring have a low information value – or an information value of zero – you’ll waste resources measuring them and do little to nothing to reduce your EOL. The cost of error, then, is the difference between your EOL with these low-value variables and the EOL with more-valuable variables.
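As a toy illustration of the arithmetic (the payoffs and probability below are invented, not from any client engagement), here is EOL computed for a simple go/no-go decision. The EOL of an action is the expected shortfall versus having chosen perfectly in each state; the EOL of the best available action is what perfect information would be worth (the expected value of perfect information, EVPI):

```python
# Hypothetical decision: "go" pays off only if demand turns out high.
p_high = 0.6                       # calibrated probability demand is high
payoff = {                         # (action, state) -> payoff
    ("go", "high"): 4_000_000, ("go", "low"): -2_000_000,
    ("no_go", "high"): 0,      ("no_go", "low"): 0,
}

def expected_payoff(action):
    return p_high * payoff[(action, "high")] + (1 - p_high) * payoff[(action, "low")]

def eol(action):
    # Loss versus the best action in each state, weighted by probability
    best_high = max(payoff[(a, "high")] for a in ("go", "no_go"))
    best_low = max(payoff[(a, "low")] for a in ("go", "no_go"))
    return (p_high * (best_high - payoff[(action, "high")])
            + (1 - p_high) * (best_low - payoff[(action, "low")]))

best = max(("go", "no_go"), key=expected_payoff)
print(f"Best action: {best}, EOL = {eol(best):,.0f}")  # this EOL is also the EVPI
```

Here the best bet is “go,” but its EOL is still $800,000 (a 40% chance of a $2 million loss), so any measurement that reduces uncertainty about demand is worth up to that amount.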

Case in point: Doug performed a VIA for an organization called CGIAR. You can read the full case study in How to Measure Anything, but the gist of the experience is this: by measuring the right variables, the model was able to reduce the EOL for a specific decision – in this case, a water management system – from $24 million to under $4 million. That’s a reduction of 85%.

Put another way, if they had measured the wrong variables, then they would’ve incurred a possible cost of $20 million, or 85% of the value of the decision.

The bottom line is simple. Measurement inversion comes with a real cost for your business, one that can be calculated. This raises important questions that every decision-maker needs to answer for every decision:

  1. Are we measuring the right things?
  2. How do we know if we are?
  3. What is the cost if we aren’t?

If you can answer these questions, and get on the right path toward better quantitative analysis, you can be more like the victorious Greeks – and less like the citizens of a city that no longer exists, all because what they thought was a gift was the terrible vehicle of their destruction. 


Learn how to start measuring variables the right way – and create better outcomes – with our hybrid learning course, How To Measure Anything: Principles of Applied Information Economics.


Going Beyond the Usual Suspects in Commercial Real Estate Modeling: Finding Better Variables and Methods



  • Quantitative commercial real estate modeling is becoming more widespread, but is still limited in several crucial ways
  • You may be measuring variables unlikely to improve the decision while ignoring more critical variables
  • Some assessment methods can create more error than they remove
  • A sound quantitative model can significantly boost your investment ROI

Quantitative commercial real estate (CRE) modeling, once the province of only the high-end CRE firms on the coasts, has become more widespread – for good reason. CRE is all about making good decisions about investments, after all, and research has repeatedly shown that even a basic statistical algorithm outperforms human judgment<fn>Dawes, R. M., Faust, D., & Meehl, P. E. (n.d.). Statistical Prediction versus Clinical Prediction: Improving What Works. Retrieved from</fn><fn>N. Nohria, W. Joyce, and B. Roberson, “What Really Works,” Harvard Business Review, July 2003</fn>.

Statistical modeling in CRE, though, is still limited, for a few different reasons we’ll cover below. Many of these limitations actually introduce more error (a common misconception is that merely having a model improves accuracy; sadly, that’s not the case). Even a few percentage points of error can result in significant losses, as any investor who has suffered a bad investment knows all too well. Better quantitative modeling, then, can decrease the chance of failure.

Here’s how to start.

The Usual Suspects: Common Variables Used Today

Variables are what you’re using to create probability estimates – and, really, any other estimate or calculation. If we can pick the right variables, and figure out the right way to measure them (more on that later), we can build a statistical model that has more accuracy and less error.

Most commercial real estate models – quantitative or otherwise – make use of the same general variables. The CCIM Institute, in its 1Q19 Commercial Real Estate Insights Report, discusses several, including:

  • Employment and job growth
  • Gross domestic product (GDP)
  • Small business activity
  • Stock market indexes
  • Government bond yields
  • Commodity prices
  • Small business sentiment and confidence
  • Capital spending

Data for these variables is readily available. For example, you can go to and check out their Weekly Schedule for a list of all upcoming reports, like the Dallas Fed Survey of Manufacturing Activity, or the Durable Goods Orders report from the Census Bureau.

The problem, though, is twofold:

  1. Not all measurements matter equally, and some don’t matter at all.
  2. It’s difficult to gain a competitive advantage if you’re using the same data in the same way as everyone else.

Learning How to Measure What Matters in Commercial Real Estate

In How to Measure Anything: Finding the Value of Intangibles in Business, Doug Hubbard explains a key theme of the research and practical experience he and others have amassed over the decades: not everything you can measure matters.

When we say “matters,” we’re basically saying that the variable has predictive power. For example, check out Figure 1. These are cases where the variables our clients were initially measuring had little to no predictive power compared to the variables we found to be more predictive. This is called measurement inversion.

Figure 1: Real Examples of Measurement Inversion

The same principle applies in CRE. Why does measurement inversion exist? There are a few reasons:

  • Variables are often chosen based on intuition, conventional wisdom, or experience – not statistical analysis or testing.
  • Decision-makers often assume industries are more monolithic than they really are when it comes to data and trends (i.e., that all businesses are sufficiently similar for broad data to be good enough).
  • Intangibles that should be measured are viewed as “impossible” to measure.
  • Looking into other, “non-traditional” variables comes with risk that some aren’t willing to take.

(See Figure 2 below.)

Figure 2: Solving the Measurement Inversion Problem

The best way to begin overcoming measurement inversion is to get precise with what you’re trying to measure. Why, for example, do CRE investors want to know about employment? Because if too many people in a given market don’t have jobs, then that affects vacancy rates for multi-family units and, indirectly, vacancy rates for office space. That’s pretty straightforward.

So, when we’re talking about employment, we’re really trying to measure vacancy rates. Investors really want to know the likelihood that vacancy rates will increase or decrease over a given time period, and by how much. Employment trends can start you down that path, but by themselves they aren’t enough. You need more predictive power.

Picking and defining variables is where a well-built CRE quantitative model really shines. You can use data to test variables and tease out not only their predictive power in isolation (through decomposition and regression), but also discover relationships with multi-variate analysis. Then, you can incorporate simulations and start determining probability.
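A minimal sketch of that idea, using synthetic data: score each candidate variable by how much of the variance in the target it explains on its own (univariate R²). The variable names and the “ground truth” relationship below are made up for illustration; real work would use historical data and multivariate methods:

```python
import random

# Synthetic data: in this invented world, vacancy is driven mostly by
# sentiment, only weakly by employment. A real analysis would not know this.
rng = random.Random(1)
n = 500
employment = [rng.gauss(0, 1) for _ in range(n)]
sentiment = [rng.gauss(0, 1) for _ in range(n)]
vacancy = [0.2 * e + 0.9 * s + rng.gauss(0, 0.5)
           for e, s in zip(employment, sentiment)]

def r_squared(x, y):
    """Share of variance in y explained by a simple linear fit on x."""
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

for name, var in [("employment", employment), ("sentiment", sentiment)]:
    print(f"{name}: R^2 = {r_squared(var, vacancy):.2f}")
```

Running this shows sentiment explaining far more of the variance than employment – the kind of ranking that tells you which measurements are worth refining further.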

For example, research has shown<fn>Heinig, S., Nanda, A., & Tsolacos, S. (2016). Which Sentiment Indicators Matter? An Analysis of the European Commercial Real Estate Market. ICMA Centre, University of Reading</fn> that “sentiment,” or the overall mood or feeling of investors in a market, isn’t something that should be readily dismissed just because it’s hard to measure in any meaningful way. Traditional ways to measure market sentiment can be dramatically improved by incorporating tools that we’ve used in the past, like Google Trends. (Here’s a tool we use to demonstrate a more predictive “nowcast” of employment using publicly-available Google Trend information.)

To illustrate this, consider the following. We were engaged by a CRE firm located in New York City to develop quantitative models to help them make better recommendations to their clients in a field that is full of complexity and uncertainty. Long story short, they wanted to know something every CRE firm wants to know: what variables matter the most, and how can we measure them?

We conducted research and gathered estimates from CRE professionals involving over 100 variables. By conducting value of information calculations and Monte Carlo simulations, along with using other methods, we came to a conclusion that surprised our client but naturally didn’t surprise us: many of the variables had very little predictive power – and some had far more predictive power than anyone thought.

One of the latter variables wound up reducing uncertainty in price by 46% for up to a year in advance, meaning the firm could more accurately predict price changes – giving them a serious competitive advantage.

Knowing what to measure and what data to gather can give you a competitive advantage as well. However, one common source of data – inputs from subject-matter experts, agents, and analysts – is fraught with error if you’re not careful. Unfortunately, most organizations aren’t.

How to Convert Your Professional Estimates From a Weakness to a Strength

We mentioned earlier how algorithms can outperform human judgment. The reasons are numerous, and we talk about some of them in our free guide, Calibrated Probability Assessments: An Introduction.

The bottom line is that there are plenty of innate cognitive biases that even knowledgeable and experienced professionals fall victim to. These biases introduce potentially disastrous amounts of error that, when left uncorrected, can wreak havoc even with a sophisticated quantitative model. (In The Quants, Scott Patterson’s best-selling chronicle of the quantitative wizards who helped engineer the 2008 collapse, the author explains how overly optimistic, inaccurate, and at times arrogant subjective estimates undermined the entire system – with disastrous results.)

The biggest threat is overconfidence, and unfortunately, the more experience a subject-matter expert has, the more overconfident he/she tends to be. It’s a catch-22 situation.

You need expert insight, though, so what do you do? First, understand that human judgments are like anything else: variables that need to be properly defined, measured, and incorporated into the model.

Second, these individuals need to be taught how to control for their innate biases and develop more accuracy with making probability assessments. In other words, they need to be calibrated.

Research has shown how calibration training often results in measurable improvements in accuracy and predictive power when it comes to probability assessments from humans. (And, at the end of the day, every decision is informed by probability assessments whether we realize it or not.) Thus, with calibration training, CRE analysts and experts can not only use their experience and wisdom, but quantify it and turn it into a more useful variable. (Click here for more information on Calibration Training.)
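One common way to quantify how well-calibrated those probability assessments are is the Brier score: the mean squared difference between stated probabilities and actual outcomes (lower is better; always guessing 50/50 scores 0.25). The forecasts and outcomes below are invented for illustration:

```python
# Eight probability judgments (e.g., "90% confident this deal closes")
# and whether each event actually happened (1) or not (0).
forecasts = [0.9, 0.8, 0.6, 0.95, 0.7, 0.5, 0.85, 0.9]
outcomes  = [1,   1,   0,   1,    1,   0,   1,    0]

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# A calibrated judge's stated probabilities match observed frequencies:
# of events called "90% likely", about 90% should actually happen.
```

Tracking a score like this before and after calibration training is a simple way to verify that experts’ probability assessments are actually improving.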

Including calibrated estimates can take one of the biggest weaknesses firms face and turn it into a key, valuable strength.

Putting It All Together: Producing an ROI-Boosting Commercial Real Estate Model

How do you overcome this challenge? Unfortunately, there’s no magic button or piece of software that you can buy off the shelf to do it for you. A well-built CRE model, incorporating the right measurements and a few basic statistical concepts based on probabilistic assessments, is what will improve your chances of generating more ROI – and avoiding costly pitfalls that routinely befall other firms.

The good news is that CRE investors don’t need an overly-complicated monster of a model to make better investment decisions. Over the years we’ve taught companies how incorporating just a few basic statistical methods can improve decision-making over what they were doing at the time. Calibrating experts, incorporating probabilities into the equation, and conducting simulations can, just by themselves, create meaningful improvements.

Eventually, a CRE firm should get to the point where it has a custom, fully-developed commercial real estate model built around its specific needs, like the model mentioned previously that we built for our NYC client.

There are a few different ways to get to that point, but the ultimate goal is to be able to deliver actionable insights, like “Investment A is 35% more likely than Investment B to achieve X% ROI over the next six months,” or something to that effect.

It just takes going beyond the usual suspects: ill-fitting variables, uncalibrated human judgment, and doing what everyone else is doing because that’s just how it’s done.


Contact Hubbard Decision Research

Doug Hubbard to Give Public Lecture on “How to Measure Anything” at University of Bonn June 28, 2019

Many things seem impossible to measure – so-called “intangibles” like employee engagement, innovation, customer satisfaction, transparency, and more – but with the right mindset and approach, you can measure anything. That’s the lesson of Doug’s book How to Measure Anything: Finding the Value of Intangibles in Business, and that’s the focus of his public lecture at the University of Bonn in Bonn, Germany on June 28, 2019.

In this lecture, Doug will discuss:

  • Three misconceptions that keep people from measuring what they should measure, and how to overcome them;
  • Why some common “quantitative” methods – including many based on subjective expert judgment – are ineffective;
  • How an organization can use practical statistical methods shown by scientific research to be more effective than anything else.

The lecture is hosted by the university’s Institute of Crop Science and Resource Conservation, which studies how to improve agriculture practices in Germany and around the world. Doug previously worked in this area when he helped the United Nations Environmental Program (UNEP) determine how to measure the impact of and modify restoration efforts in the Mongolian desert. You can view that report here. You can also view the official page for the lecture on the university’s website here.

Top Under-the-Radar Cybersecurity Threats You May Not See Coming


In every industry, the risk of cyber attack is growing.

In 2015, a team of researchers forecasted that the maximum number of records that could be exposed in breaches – then around 200 million – would increase by 50% by 2020. According to the Identity Theft Resource Center, the number of records exposed in 2018 alone was nearly 447 million – an increase of more than 120%, far outpacing that forecast. By 2021, damages from cybersecurity breaches are projected to cost organizations $6 trillion a year. In 2017, breaches cost global companies an average of $3.6 million, according to the Ponemon Institute.

It’s clear that this threat is sufficiently large to rank as one of an organization’s most prominent risks. To this end, corporations have entire cybersecurity risk programs in place to attempt to identify and mitigate as much risk as possible.

The foundation of accurate cybersecurity risk analysis begins with knowing what is out there. If you can’t identify the threats, you can’t assess their probabilities – and if you can’t assess their probabilities, your organization may be exposed by a critical vulnerability that won’t make itself known until it’s too late.

Cybersecurity threats vary in their specifics from entity to entity, but in general, there are several common dangers that may be flying under the radar – including some you may not have seen coming until now.

A Company’s Frontline Defense Isn’t Keeping Pace

Technology is advancing at a more rapid rate than at any other point in human history: concepts such as cloud computing, machine learning, artificial intelligence, and Internet of Things (IoT) provide unprecedented advantages, but also introduce distinct vulnerabilities.

This rapid pace requires that cybersecurity technicians stay up to speed on the latest threats and mitigation techniques, but this often doesn’t occur. In a recent survey of IT professionals conducted by (ISC)², 43% indicated that their organization fails to provide adequate ongoing security training.

Unfortunately, leadership in companies large and small has traditionally been reluctant to invest in security training. The reason is largely psychological: decision-makers tend to view IT investment in general as an expense that should be limited as much as possible, rather than as a hedge against the greater cost of failure.

Part of the reason is how budgets are structured. IT investment adds to operational cost, and decision-makers – especially those in the MBA generation – are trained to reduce operational costs as much as possible in the name of greater efficiency and higher short-term profit margins. This mindset can keep executives from seeing IT investments for what they are: the price of mitigating greater costs.

Increases in IT security budgets also aren’t pegged to the increase of a company’s exposure, which isn’t static but fluctuates (and, in today’s world of increasingly-sophisticated threats, often increases).

The truth is, of course, that while investing in cybersecurity may not make a company more money – the myopic objection – it can keep a company from losing far more.

Another threat closely related to the above is how decision-makers tend to view probabilities. Research shows that decision-makers often discount the potential cost of a negative event – like a data breach – because of its relatively low probability (i.e., “It hasn’t happened before, and it probably won’t happen, so we don’t have to worry as much about it.”). These are called tail risks: risks whose costs are disproportionate to their probabilities. In other words, they may not happen often, but when they do, the consequences are frequently catastrophic.
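To make the tail-risk point concrete, here is a minimal Python sketch. The probabilities and loss figures are hypothetical, purely for illustration: a rare breach can carry a far larger expected annual loss than a frequent nuisance event, even though it is the less likely of the two.

```python
# Hypothetical numbers for illustration only: a frequent, small loss
# versus a rare "tail" event with a catastrophic cost.
risks = {
    "phishing incident": {"annual_probability": 0.60, "loss_if_it_occurs": 50_000},
    "major data breach": {"annual_probability": 0.05, "loss_if_it_occurs": 20_000_000},
}

for name, r in risks.items():
    expected_loss = r["annual_probability"] * r["loss_if_it_occurs"]
    print(f"{name}: expected annual loss = ${expected_loss:,.0f}")

# The breach is 12x less likely, yet its expected annual loss
# ($1,000,000 vs. $30,000) is more than 30x larger.
```

A decision-maker who looks only at the probability column would rank these risks exactly backwards.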

There’s also a significant shortfall in cybersecurity professionals that is introducing more vulnerability into organizations already stressed to capacity. Across the globe, there are 2.93 million fewer cybersecurity workers than are needed. In North America, that shortfall in 2018 was just under 500,000.

Nearly a quarter of respondents in the aforementioned (ISC)² survey said they had a “significant shortage” of cybersecurity staff. Only 3% said they had “too many” workers. Overall, 63% of companies reported having fewer workers than they needed, and 59% said they were at “extreme or moderate risk” due to their shortage. (Yet 43% said they either weren’t going to hire any new workers or were even planning to decrease the number of security personnel on their rosters.)

A combination of less training, inadequate budgets, and fewer workers all contribute to a major threat to security that many organizations fail to appreciate.

Threats from Beyond Borders Are Difficult to Assess – and Are Increasing

Many cybersecurity professionals correctly identify autonomous individuals and entities as a key threat – the stereotypical hacker or a team within a criminal organization. However, one significant and overlooked vector is the threat posed by other nations and foreign non-state actors.

China, Russia, and Iran are at the forefront of countries that leverage hacking in a state-endorsed effort to gain access to proprietary technology and data. In 2017, China implemented a law requiring any firm operating in China to store their data on servers physically located within the country, creating a significant risk of the information being accessed inappropriately. China also takes advantage of academic partnerships that American universities enjoy with numerous companies to access confidential data, tainting what should be the purest area of technological sharing and innovation.  

In recent years, Russia has noticeably increased its demand to review the source code for any foreign technology being sold or used within its borders. Finally, Iran contains numerous dedicated hacking groups with defined targets, such as the aerospace industry, energy companies, and defense firms.

More disturbing than the source of these attacks are the pathways they use to acquire this data – including one surprising method. A Romanian source recently revealed to Business Insider that when large companies sell outdated (but still functional) servers, the information isn’t always completely wiped. The source in question explained that he’d been able to procure an almost complete database from a Dutch public health insurance system; all of the codes, software, and procedures for traffic lights and railway signaling for several European cities; and an up-to-date employee directory (including access codes and passwords) for a major European aerospace manufacturer from salvaged equipment.

A common technique used by foreign actors in general, whether private or state-sponsored, is to use legitimate front companies to purchase or partner with other businesses and exploit the access afforded by these relationships. Software supply chain attacks have significantly increased in recent years, with seven significant events occurring in 2017, compared to only four between 2014 and 2016. FedEx and Maersk suffered approximately $600 million in losses from a single such attack.

The threat from across borders can be particularly difficult to assess due to distance, language barriers, a lack of knowledge about the local environment, and other factors. It is, nonetheless, something that has to be taken into consideration by a cybersecurity program – and yet often isn’t.

The Biggest Under-the-Radar Risk Is How You Assess Risks

While identifying risks is the foundation of cybersecurity, appropriately analyzing them is arguably more important. Many commonly used methods of risk analysis can actually obscure and increase risk rather than expose and mitigate it. In other words, many organizations are vulnerable to the biggest under-the-radar threat of them all: a broken risk management system.

Qualitative and pseudo-quantitative methods often create what Doug Hubbard calls the “analysis placebo effect,” where tactics are perceived to be improvements but offer no tangible benefits. This can increase vulnerability by instilling a false sense of confidence, and psychologists have shown that it can occur even when the tactics themselves increase estimation error. Two months before a massive cyber attack rocked Atlanta in 2018, a risk assessment revealed various vulnerabilities, but the actions taken to address them fell short of actually resolving the city’s exposure – although officials were confident they had adequately addressed the risk.

Techniques such as heat maps, risk matrices, and soft scoring often fail to inform an organization regarding which risks they should address and how they should do so. Experts indicate that “risk matrices should not be used for decisions of any consequence,<fn>Thomas, Philip & Bratvold, Reidar & Bickel, J. (2013). The Risk of Using Risk Matrices. SPE Economics & Management. 6. 10.2118/166269-MS.</fn>” and they can be even “worse than useless.<fn>Anthony (Tony) Cox, L. (2008), What’s Wrong with Risk Matrices?. Risk Analysis, 28: 497-512. doi:10.1111/j.1539-6924.2008.01030.x</fn>” Studies have repeatedly shown, in numerous venues, that collecting too much data, collaborating beyond a certain point, and relying on structured, qualitative decision analyses consistently produce worse results than if these actions had been avoided.

It’s easy to assume that many aspects of cybersecurity are impossible to estimate, but we believe that anything can be measured. And if it can be measured, it can be assessed and addressed appropriately. A quantitative model that circumvents the overconfidence common to qualitative measures, uses properly calibrated expert assessments, identifies which information is most valuable and which isn’t, and is built on a comprehensive, multi-disciplinary framework can provide actionable data to guide decisions.
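As a rough illustration of what such a quantitative model can look like, here is a minimal Monte Carlo sketch in Python. The threat list, probabilities, and 90% confidence intervals below are hypothetical placeholders, not calibrated estimates: each threat gets an annual chance of occurring and a lognormal loss if it does, and the simulation estimates the probability that total annual losses exceed a chosen threshold.

```python
import math
import random

random.seed(42)  # reproducible illustration

# Hypothetical threats: annual probability of occurrence plus a
# 90% confidence interval (low, high) for the loss if it occurs.
threats = [
    {"name": "data breach",    "prob": 0.10, "low": 200_000, "high": 10_000_000},
    {"name": "ransomware",     "prob": 0.15, "low": 100_000, "high": 5_000_000},
    {"name": "insider misuse", "prob": 0.05, "low": 50_000,  "high": 2_000_000},
]

def sample_loss(low: float, high: float) -> float:
    # Treat (low, high) as the 90% CI of a lognormal distribution,
    # a common choice for modeling loss magnitudes.
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return random.lognormvariate(mu, sigma)

def simulate_year() -> float:
    # Total loss for one simulated year: each threat either occurs or not.
    return sum(sample_loss(t["low"], t["high"])
               for t in threats if random.random() < t["prob"])

trials = 10_000
losses = [simulate_year() for _ in range(trials)]

# Estimated probability that total annual losses exceed $1 million.
p_exceed = sum(1 for x in losses if x > 1_000_000) / trials
print(f"P(annual loss > $1M) ~ {p_exceed:.1%}")
```

Replacing the placeholder inputs with calibrated expert estimates, and reading off the full loss exceedance curve rather than a single threshold, is the direction the quantitative approach described above takes.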

Bottom line: not all cybersecurity threats are readily apparent, and the most dangerous ones can easily be those you underestimate, or don’t see coming at all. Knowing which factors to measure and how to quantify them can help you identify the most pressing vulnerabilities, which is the cornerstone of effective cybersecurity practices.

For more information on how to create a more effective cybersecurity system based on quantitative methods, check out our How to Measure Anything in Cybersecurity Risk webinar.