Power Law vs. Lognormal Distribution: Which is the Right Choice for My Model?

At Hubbard Decision Research, we’ve built dozens upon dozens of risk models for companies of different sizes across wildly diverse areas. One question we sometimes get from our more quantitatively sophisticated clients is, “Should we use a power law distribution or a lognormal distribution to model the impact of this risk?” On the surface, it seems like a simple question. However, when you’re dealing with highly uncertain ranges for some impacts, it gets a bit more complicated. Ultimately, the choice comes down to a few key considerations: identifying which distribution best fits your data, understanding the uncertainties around growth and tail behavior, and examining your existing assumptions about the specific impact (such as whether it has a natural limit or how it scales).

Both power law and lognormal distributions are relatively common in quantitative risk modeling. They share some useful properties: neither can produce a zero or negative value in a simulation, and both have heavy positive tails (which is essential for capturing the full range of impacts). However, the differences between them can drastically alter decisions if you’re not careful. Choosing a lognormal distribution when the data really follows a power law can lead you to underestimate your losses by potentially astronomical amounts. Choosing a power law when the data really follows a lognormal can lead you to overprioritize smaller risks that don’t justify the attention. Neither scenario is ideal, and the flexibility to fit your data to the correct distribution is essential for prioritizing mitigations or controls (especially across large portfolios of mitigations/controls).

How do you choose which to use? There are a few approaches to this, and the most noticeable differences live in the tails of the distributions. A power law distribution assumes that extreme events happen more often than a lognormal does. If your business decisions depend on how you treat these rare events, this difference can have a big impact. If you’re worried about a few massive events driving most of your losses, and your data suggests there’s no natural limit to how bad things can get, the power law may be the better fit. On the other hand, if you’re modeling something that tends to grow or spread gradually, like cost overruns or delays, with a known rough limit on how bad losses can be, the lognormal may be the more realistic alternative.

One of the key differences to understand is how each distribution handles growth. A lognormal distribution assumes a steady, compounding process, like many small risks building up over time. For example, if you are modeling network risk, equipment ages and wears out over time; those losses would likely accumulate steadily and fit a lognormal distribution. A power law, on the other hand, assumes a more chaotic buildup, where both the size and number of risks can grow unpredictably. In our network risk example, this might best describe large internet or application outages triggered by uncertain yet cascading failures. So while the lognormal reflects consistent compounding, the power law reflects compounding under deep uncertainty, as the sketch below illustrates for the lognormal case.
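
To make the “steady compounding” mechanism concrete, here is a minimal Python sketch (all parameter values are illustrative assumptions, not calibrated to any real data). It compounds many small, independent multiplicative shocks; because the log of a product is a sum of logs, the resulting losses come out approximately lognormal.

```python
import numpy as np

rng = np.random.default_rng(42)

# A loss that grows by many small, independent multiplicative shocks --
# e.g., equipment degrading a little more each period over 50 periods.
# Illustrative assumption: each period multiplies the running loss by a
# factor drawn from a normal distribution centered just above 1.
n_trials, n_periods = 10_000, 50
shocks = rng.normal(loc=1.02, scale=0.05, size=(n_trials, n_periods))
losses = 1_000.0 * shocks.prod(axis=1)  # start at $1,000 and compound

# The log of a product is a sum of logs, so by the central limit theorem
# log(losses) is approximately normal -- i.e., losses are ~lognormal.
log_losses = np.log(losses)
skew = ((log_losses - log_losses.mean()) ** 3).mean() / log_losses.std() ** 3
print(f"mean loss: ${losses.mean():,.0f}")
print(f"skew of log-losses (near 0 for a lognormal): {skew:.3f}")
```

One known route to a power law, by contrast, is this same multiplicative process observed after a highly uncertain, random number of steps, which echoes the “compounding under deep uncertainty” description above.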

The best-case scenario is to use the data you have available to compare which distribution fits best. There is also a lot of historical precedent for the use of certain distributions, so don’t fly solo if you don’t have to. An axiom we have at HDR is that it has probably been measured before. Look at what others have done, and the rationale for why they did it, as you make your choice. Risk modeling isn’t about being perfect; it’s about improving on the existing approach. If you’re currently using qualitative or pseudo-quantitative approaches like scales or scores, either option will probably move you in a better direction. On the book website (linked at the bottom) you can find a spreadsheet that goes with Appendix A, where you can input parameters to generate both lognormal distributions and power law distributions. I encourage you to experiment with it and familiarize yourself with the characteristics of these distributions. In Figure 1, I used a simple simulation with 1,000 trials comparing monetary losses under a Lomax power law and a lognormal distribution with the same average, which highlights the need to understand the general trends of the data, as well as the assumptions outlined earlier. A rough code sketch of the same kind of comparison follows below.
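
Here is a hedged Python sketch of that comparison: a Lomax (Pareto Type II) power law and a lognormal calibrated to the same mean. The shape parameters and the $1M mean are illustrative assumptions, not the values actually behind Figure 1.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000  # trials, matching the simple simulation described above

# --- Lomax (Pareto Type II) power law ---
# Illustrative assumptions: shape alpha = 2.0 and a $1M mean.
# A Lomax with shape a and scale lam has mean lam / (a - 1) for a > 1.
alpha = 2.0
target_mean = 1_000_000
lam = target_mean * (alpha - 1)            # scale that hits the target mean
lomax_losses = lam * rng.pareto(alpha, n)  # numpy's pareto() draws a Lomax with scale 1

# --- Lognormal with the same mean ---
# A lognormal has mean exp(mu + sigma^2 / 2); fix sigma, solve for mu.
sigma = 1.0
mu = np.log(target_mean) - sigma**2 / 2
lognormal_losses = rng.lognormal(mu, sigma, n)

# Same average, very different tails.
for name, x in [("Lomax", lomax_losses), ("Lognormal", lognormal_losses)]:
    p50, p99 = np.percentile(x, [50, 99])
    print(f"{name:>9}: mean ${x.mean():,.0f} | median ${p50:,.0f} | "
          f"99th ${p99:,.0f} | max ${x.max():,.0f}")
```

Run it a few times with different seeds: both averages stay near $1M, but the extreme outcomes under the power law are larger and far more erratic from run to run, which is exactly the tail behavior discussed above.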

Figure 1: Simulation Output Power Law vs. Lognormal with the Same Mean


Don’t let the perfect be the enemy of the good (to paraphrase Voltaire). However, recognize when and where different distributions can and should be used. If this is something you’re having trouble with, HDR helps clients with it on a daily basis. Measure what matters, make better decisions, and keep moving in the right direction.

How To Measure Anything in Cybersecurity Risk | Downloads

Are Your KPIs Actually KPIs?

In nearly every organization, Key Performance Indicators (KPIs) are a staple of performance tracking and strategy alignment. But how often do we stop to question whether our KPIs are truly key, or even indicators of performance? Was any analysis done to evaluate the efficacy of these KPIs? At what value, for a given KPI, do we need to take action or intervene? From what we’ve seen, the answers to these questions often leave a lot to be desired.

Too often, businesses select KPIs that are not actually impactful but provide a false sense of security simply because data is being analyzed and tracked. We see organizations track metrics like total page views, hours worked, or number of meetings scheduled, none of which tell us much about actual business outcomes. These metrics are often displayed on a dashboard without being tied to any specific decision or intervention.

Effective KPIs should be tied directly to a decision. If a KPI doesn’t drive an action or inform a choice, it’s just adding to the noise. For example, tracking average customer response time is only valuable if it helps improve satisfaction or retention: at what response time does it become justifiable to intervene to protect retention? If it’s measured but ignored, it isn’t a KPI; it’s background data. That data may be useful or worth tracking, but if it isn’t informing decisions or flagging areas to intervene, it’s not a KPI.

At HDR we have observed a phenomenon called the Measurement Inversion (see Figure 1) in almost every industry we’ve worked in. Organizations are often measuring and tracking data (at great expense) that has no actual influence on a decision. When we run information value calculations in decision models, we routinely see little to no value in the metrics being tracked and massive value for other metrics in the model (metrics that could actually swing a decision one way or the other). A stripped-down version of that calculation is sketched below.
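
For readers who haven’t seen one, here is a minimal Python sketch of an information value calculation, computing the expected value of perfect information (EVPI) for a toy go/no-go decision. The payoffs and success probability are illustrative assumptions, not numbers from any client model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo scenarios

# Toy decision: launch a project or do nothing. Illustrative assumptions:
# the project succeeds with probability 0.6, paying $2M on success and
# losing $1.5M on failure. Doing nothing pays $0.
p_success = 0.6
payoff = np.where(rng.random(n) < p_success, 2_000_000, -1_500_000)

# Without more information, we pick the action with the best expected value.
ev_launch = payoff.mean()
ev_best_without_info = max(ev_launch, 0.0)

# With perfect information, we would launch only in the success scenarios,
# so the expected payoff is the mean of max(payoff, 0) across scenarios.
ev_with_perfect_info = np.maximum(payoff, 0).mean()

evpi = ev_with_perfect_info - ev_best_without_info
print(f"EV, deciding blind:          ${ev_best_without_info:,.0f}")
print(f"EV, with perfect info:       ${ev_with_perfect_info:,.0f}")
print(f"EVPI (max worth paying to measure): ${evpi:,.0f}")
```

A metric whose value could never change which action wins has an information value of zero, and the Measurement Inversion is the observation that organizations tend to spend the most measuring exactly those metrics.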

Figure 1: Depiction of the Measurement Inversion


At the end of the day, KPIs are not only about data; they’re about decisions. The most valuable metrics illuminate what matters, show what’s working, and prompt better actions. If your current KPIs don’t do that, it’s time to rethink what you’re measuring and how you’re measuring it.

The Role of Noise in Risk Management

A psychologist’s take on an often underappreciated and misunderstood topic.

Prior to working in management consulting, I was a career academic. I taught a variety of psychology courses at a university and conducted cognitive psychology research (I also engaged in large-scale replication work addressing fundamental issues in how psychological science is conducted). Through my doctoral training, research, and professional experience in the field, I became acutely aware of how misconceptions and misunderstandings of key elements of psychology and neuroscience have crept into society’s day-to-day topics. Risk management, as a discipline, is no different. Some elements of psychology within risk management, such as cognitive biases, are reasonably well understood, communicated, and applied to risk management problems; in others, I still see gaps (which is only natural).

One key element I think most people inherently understand about psychology is that, as a discipline, it approaches the same problem or situation from multiple layers of analysis, which all mesh together in our larger understanding and appreciation of our own reality. These layers range from macro-level elements within social psychology all the way down to cellular (and in some instances sub-cellular) levels within the nervous system. My area of expertise sits in the cognitive layer, with a large emphasis on how neurological principles and cognitive function are related. From a basic science perspective, that is what always interested me as a researcher, and one of the areas in this layer that fascinated me was the concept of noise. Thanks to popular researchers like the late Daniel Kahneman, the idea of noise in risk management has gained some traction in recent years. However, the element of noise I was interested in is far more fundamental than what Kahneman was describing: neural noise, and how it changes, and fundamentally increases, across the lifespan. I won’t go into all of the details here, but neural noise influences every single element of cognitive processing, from the sensory input stage all the way to behavioral output. Furthermore, the distribution of that noise fundamentally widens over time as we age (the 2nd law of thermodynamics at work).

When people think about noise in risk management, I think they often miss the boat a little bit. They understand the elements Kahneman describes in his book, which is a great starting point, but they miss the bigger picture. Every single thought, sensation, behavior, and social interaction is influenced by noise. Even the simplest perceptual tasks are measurably influenced by it. In experimental psychology and psychophysics (no, not crazy physics, as fun as that would be), these concepts have been fundamental to how we think about perception, behavior, and the underpinnings of decision-making for over 150 years, dating back to the Weber-Fechner laws and beyond. Consider “discriminal dispersion,” a concept Thurstone introduced almost a hundred years ago (building on Weber’s and Fechner’s foundations) in his paper “A Law of Comparative Judgment”: the internal impression evoked by the same stimulus varies from moment to moment, and that variability is itself measurable. These ideas have been studied and expanded upon consistently over the past 50 years; work from Lester Krueger and Philip Allen (among a long list of great researchers I won’t list out) inspired me to study this more and more. Interestingly, the probability concepts covered in the more quantitatively rigorous areas of risk management and probability theory align perfectly with these ideas, which was largely my inspiration for joining Hubbard Decision Research in the first place. A toy sketch of Thurstone’s mechanism follows below.
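
As an illustration of how internal noise turns into probabilistic judgments, here is a minimal Python sketch of the mechanism behind Thurstone’s model (Case V). The stimulus magnitudes and noise level are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n = 100_000  # simulated comparison trials

# Thurstone's idea: each presentation of a stimulus evokes a noisy internal
# "discriminal process" -- its true value plus random variability. The
# spread of that variability is the "discriminal dispersion."
mu_a, mu_b = 10.0, 10.5  # true magnitudes of stimuli A and B (assumed)
dispersion = 1.0         # discriminal dispersion (assumed equal, Case V)

percept_a = mu_a + rng.normal(0.0, dispersion, n)
percept_b = mu_b + rng.normal(0.0, dispersion, n)

# The observer reports "B is larger" whenever B's noisy percept exceeds A's.
p_simulated = (percept_b > percept_a).mean()

# Closed form: the percept difference is normal with sd dispersion*sqrt(2),
# so P(B judged larger) = Phi((mu_b - mu_a) / (dispersion * sqrt(2))).
z = (mu_b - mu_a) / (dispersion * sqrt(2))
p_predicted = 0.5 * (1.0 + erf(z / sqrt(2)))

print(f"simulated P(B judged larger): {p_simulated:.3f}")
print(f"predicted P(B judged larger): {p_predicted:.3f}")
```

The point for risk management is that even a fixed pair of “stimuli” yields inconsistent judgments at a rate set entirely by the noise, which is one reason unaided expert comparisons drift from one assessment to the next.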

Tying this back to risk management: the broader inspiration for writing this came from a recent podcast titled “Unlocking Resilience. Mass Media & Prioritization.” with Brandon Daniels and David Merritt (the link is at the bottom; it’s worth a listen). There is a lot of great substance in the podcast, but one key point stuck out to me. It was made by David Merritt around the 19-minute mark, and it has to do with how to prioritize human attention and, ultimately, optimize those precious, finite human resources. While this was one item in a broader conversation, it made me stop on a walk with my dog and write a note on my phone. Human attention, human cognition, and human performance in general are fundamentally influenced by noise, and developing a risk quantification framework that buttresses and supports decision makers in the face of inevitable (and neurologically fundamental) noise is essential. Risk management at large organizations is complicated; there are so many moving pieces. Providing the right people with the right tools to make better strategic decisions under uncertainty and target the right risks ultimately requires a consistent, unambiguous, and stable approach to risk management despite the internal noise we deal with as biological beings.

Doug Hubbard often says when we talk to clients (and he’s not alone in the risk quantification space) that “I’d rather not have to do that math in my head.” The field of quantitative risk management is, in essence, eclectic, with psychology an important and often overlooked area. Understanding how foundational concepts within psychology apply to risk management is a competitive advantage. Essentially every cognitive process is in some way limited or capped. Recognizing those limitations and developing solutions to minimize their impact, such as building quantitative risk models rather than relying on unaided intuition, or replacing sub-optimal qualitative scoring approaches (which actually add more noise) with even simple quantitative models, will protect organizations from their best, yet most flawed, asset: their people (most flawed might be an exaggeration, but it helps get the point across).

Unlocking Resilience. Mass Med… – Cybercrime Magazine Podcast – Apple Podcasts