Metrics may represent reality, but they aren’t reality

The Midwest is bracing for some cold weather. Forecasts for the Chicago area predict some of the coldest temperatures in the city's recorded history.

As Chicagoans gear up for the cold, the “old timers” are rolling their eyes. They tell us that while our forecasted -55 wind chill is cold, it’s nothing like the -82 that they lived through back in the 1980s (when they used to walk to school uphill both ways). As far as they are concerned, we’re just a bunch of pampered wimps who are scared of the cold.

The problem is that they are wrong.

According to the Illinois Storm Chasers' website, the formula for calculating wind chill changed in 2001[1].

In terms of coldness, a -82 wind chill in 1983 is the same as a -55 wind chill in 2019. The metrics and their values are different but the reality they reflect is exactly the same.
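The equivalence is easy to see by running both formulas on the same conditions. The post-2001 formula is the one published by the National Weather Service; the pre-2001 version below is a commonly cited form of the older Siple-Passel-based calculation, and the temperature and wind values are illustrative, not an actual forecast:

```python
import math

def wind_chill_2001(temp_f, wind_mph):
    """Post-2001 NWS formula (temp in degrees F, wind in mph; valid for temp <= 50, wind > 3)."""
    return (35.74 + 0.6215 * temp_f
            - 35.75 * wind_mph ** 0.16
            + 0.4275 * temp_f * wind_mph ** 0.16)

def wind_chill_pre_2001(temp_f, wind_mph):
    """A commonly cited form of the older (Siple-Passel era) formula."""
    return (0.0817 * (3.71 * math.sqrt(wind_mph) + 5.81 - 0.25 * wind_mph)
            * (temp_f - 91.4) + 91.4)

temp, wind = -20.0, 30.0  # same physical conditions for both calculations
print(round(wind_chill_pre_2001(temp, wind)))  # the "old timers'" scarier number
print(round(wind_chill_2001(temp, wind)))      # today's milder-looking number
```

Identical air, identical wind, and the old formula reports a wind chill roughly 25 degrees more extreme than the new one. The weather didn't change; the metric did.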

Sometimes it’s easy to forget that all metrics are made up.

Metrics aren’t organic. You can’t mine them from the earth or grow them in a field. They are constructed by people. They are their creators’ best approximation of reality. But they are not reality.

In his book, Reckoning with Risk, Gerd Gigerenzer tells an interesting story about computing risk.

Gigerenzer was touring a rocket manufacturing facility.

He noticed a list of the company’s 94 rocket launches. Eight of the launches were marked with an asterisk. He learned that an asterisk represented launches where an accident occurred.

He asked his tour guide how risky rocket launches were.

The guide stated that their rockets had a security factor of around 99.6%. Gigerenzer was puzzled. Eight out of ninety-four launches had had an accident. That’s an 8.5% accident rate. How could the security factor be 99.6%?

The guide explained that the security factor did not take accidents into account. Most accidents resulted from human error, which the company didn’t believe should be included in calculating the rocket’s security factor. Instead, the factor was based on the design features of the rockets’ individual parts.

Gigerenzer’s 8.5% calculation was just as valid as the company’s 99.6% calculation. Each was a legitimate way to measure and represent risk. However, both depended on how their inventors chose to represent the world. Neither was more right than the other, but each might be the better representation of reality for a different set of decisions.
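Gigerenzer's figure falls straight out of the launch tally; the company's figure, notably, does not. A quick sketch makes the gap concrete:

```python
launches = 94
accidents = 8

# Gigerenzer's metric: observed accident rate across all launches
accident_rate = accidents / launches
print(f"{accident_rate:.1%}")  # 8.5%

# The company's 99.6% came from a different model entirely: it scored
# the design reliability of individual components and excluded
# human-error accidents, so it cannot be derived from launch counts alone.
```

That the 99.6% can't be reproduced from the data in front of you is exactly the point: you have to ask what went into a metric before you can know what it tells you.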

All metrics are invented. Even something as simple as “defects per one thousand parts” has assumptions built into it. What is considered a defect? Does it only include defects that are discovered during production or does it include those that are discovered later? Every calculation has assumptions built into it.
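Even that simple defect metric splits in two once you pin down the definition. A toy sketch, with all figures invented for illustration:

```python
parts_produced = 2000
defects_in_production = 3   # caught on the line (hypothetical count)
defects_found_in_field = 4  # discovered after shipping (hypothetical count)

# Narrow definition: only defects caught during production
narrow = defects_in_production / parts_produced * 1000

# Broad definition: every defect, whenever it is discovered
broad = (defects_in_production + defects_found_in_field) / parts_produced * 1000

print(narrow, broad)  # 1.5 vs 3.5 defects per thousand parts
```

Same factory, same parts, and the "defects per thousand" figure more than doubles depending on which assumption the metric's inventor baked in.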

In 2016 Facebook ran into a problem with advertisers. It turned out that Facebook was overestimating the average viewing time of its video ads.

Average viewing time should be a pretty straightforward calculation, right? Simply add up the duration of each view and divide by the number of views. So, what happened?

It turns out that a lot of people click on ads and abandon them almost immediately. Facebook wasn’t including those people in its calculation. It only counted views that lasted at least three seconds. Most likely Facebook had a good reason for excluding those views. However, the choice caused the metric not to reflect reality.
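The effect of that three-second cutoff is easy to demonstrate with made-up view durations (Facebook's actual data and exact method aren't public):

```python
# Seconds each viewer actually watched, including quick abandons (invented data)
view_durations = [0.5, 1.0, 2.0, 10.0, 20.0, 30.0]

# The straightforward average over every view
true_average = sum(view_durations) / len(view_durations)

# The cutoff version: only views of at least three seconds count
counted = [d for d in view_durations if d >= 3.0]
reported_average = sum(counted) / len(counted)

print(true_average, reported_average)  # roughly 10.6 vs exactly 20.0
```

Dropping the quick abandons nearly doubles the reported average. Neither number is a lie; they just answer different questions, and only one of them matches what an advertiser probably assumes "average viewing time" means.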

Don’t confuse a metric with reality. Take time to understand:

· The assumptions the inventor made about how to represent reality

· What is included in and excluded from the calculation

· How the formula is calculated

More importantly, use the metric in conjunction with your understanding of the business. Ask yourself if what you are seeing reflects reality. Compare it to other metrics which attempt to represent the same or a similar aspect of the business. Think critically about how you would represent the business and compare your model to the metric.

Metrics are powerful and useful tools. However, like any tool, if they are wielded in the wrong way, they are likely to do more harm than good.


Brad Kolar is an executive consultant, speaker, and author at Avail Advisors. Avail’s Rethinking Data workshop can help you quickly identify the metrics you need to drive your decisions. Contact Brad at


