Brendan Stec

Why We're Terrible at Forecasting

Think about the basic skills most of us can perform proficiently. Writing a sentence, catching a ball, or cooking a meal are all learned through immediate, precise feedback, which lets us get better with each try. Moreover, since cooking a poor meal or getting hit in the face with a ball are not pleasant experiences, we're usually incentivized to perform these skills competently.

Forecasting is different. While we forecast events every day - for example, estimating to a friend how much longer until we arrive at a restaurant - we're rarely held accountable for the accuracy of these forecasts. We may say we'll arrive in 10 minutes (we really mean 25), but what's the consequence? It's usually not a huge issue. TV pundits also give all sorts of predictions about politics and the economy, but their prophecies are usually no more accurate than chance. The public rarely holds those who make poor forecasts on TV or online accountable; at the end of the day, these pundits are really selling entertainment, not accuracy.

There are also unconscious cognitive biases and heuristics that can affect the accuracy of forecasts. Most of us are plagued by overconfidence. In studies where researchers ask participants to give 90% confidence intervals for various obscure facts, nothing stops participants from answering with an interval as wide as [0, infinity], yet most give ranges so narrow that the true answer falls outside them far more often than 10% of the time. Afraid of looking unknowledgeable or incompetent, we easily succumb to overconfidence when forced to confront uncertainty, and that can deeply damage forecasting accuracy.
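As a rough illustration, checking your own calibration is just a matter of counting how often the true answer lands inside the intervals you wrote down. The sketch below uses invented numbers (not data from any of the studies referenced here): if the intervals really were 90% intervals, the hit rate should be close to 90%, and a much lower rate signals overconfidence.

```python
# Hypothetical 90% confidence intervals someone wrote down for trivia questions,
# paired with the true answers. These numbers are made up for illustration.
intervals = [(5, 20), (100, 300), (1, 3), (40, 60), (10, 15),
             (2000, 4000), (0, 50), (7, 9), (300, 500), (25, 35)]
true_values = [42, 650, 2, 55, 31, 2500, 120, 8, 410, 90]

# Count how many true answers fall inside the stated intervals.
hits = sum(low <= truth <= high
           for (low, high), truth in zip(intervals, true_values))

print(f"Hit rate: {hits / len(intervals):.0%} "
      f"(well-calibrated 90% intervals should capture the truth about 90% of the time)")
```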

The availability heuristic - where we base judgments and forecasts on the examples that immediately come to mind - is another dangerous mental shortcut. Under this heuristic, our forecasts can be strongly influenced by past events that made a vivid impression on us. For example, it becomes tempting to overestimate the probability of a terrorist attack if a dramatic attack recently occurred (and is blasted all over the media). Likewise, an entrepreneur may be overly optimistic in forecasting the success of his new startup, since successful entrepreneurs of the past are paraded around in magazines and on TV, while unsuccessful entrepreneurs by the thousands are forgotten or overlooked.

How can we make better forecasts? In general, to combat the distortions of biases and heuristics, a disciplined, probabilistic approach is key. A good start is to get in the habit of challenging other people's forecasts. What incentive does a hairstylist have to give an accurate estimate of the wait time? Does an NBA analyst really have an incentive to be correct with his projections for the draft? To avoid anchoring to a specific value, it can also help to estimate the lower and upper bounds of a forecast first. And since many studies show the advantages of algorithmic prediction, considering a statistical or machine learning approach that emphasizes out-of-sample accuracy is another pragmatic tool. Finally, in instances where it's possible, aggregating independent individual forecasts - making use of the "wisdom of the crowd" - is a powerful method for averaging out different biases to arrive at a single consensus.
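To make the "wisdom of the crowd" point concrete, here is a minimal Python sketch with made-up numbers (not drawn from any of the references). Each simulated forecaster has its own independent bias and noise; because those errors are independent, they partially cancel when the forecasts are averaged, so the crowd's typical error ends up smaller than a single forecaster's.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0      # the quantity being forecast (unknown to the forecasters)
NUM_FORECASTERS = 50    # size of the "crowd"
NUM_TRIALS = 1_000      # repeat the experiment to compare typical errors

individual_errors = []
crowd_errors = []

for _ in range(NUM_TRIALS):
    # Each forecaster's estimate carries independent noise and an idiosyncratic
    # bias; averaging lets these independent errors partially cancel.
    forecasts = [
        TRUE_VALUE + random.gauss(0, 15) + random.uniform(-10, 10)
        for _ in range(NUM_FORECASTERS)
    ]
    crowd_forecast = statistics.mean(forecasts)

    individual_errors.append(abs(random.choice(forecasts) - TRUE_VALUE))
    crowd_errors.append(abs(crowd_forecast - TRUE_VALUE))

print(f"Typical error, single forecaster: {statistics.mean(individual_errors):.1f}")
print(f"Typical error, crowd average:     {statistics.mean(crowd_errors):.1f}")
```

The averaging only helps to the extent the forecasts are genuinely independent; if everyone anchors on the same pundit or the same headline, the errors no longer cancel.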

References

1) The Signal and the Noise, Nate Silver

2) Superforecasting, Philip Tetlock

3) "The Hidden Traps in Decision Making" (originally published in the Harvard Business Review); John Hammond, Ralph Keeney, and Howard Raiffa
