There is a good chance that I’ll look very silly by writing down my thoughts on this topic before the election. I could wait until November 4th before penning some polemic for or against Nate Silver—declaring after the fact that he was “obviously right/wrong” and should probably just resign/get a medal. But the fact is, the result of the “model wars” will be far less clear-cut.

At its core, election forecasting is a simple idea. We have some data about how people are going to vote and we want to know how likely the possible election outcomes are. To guide us we have the polls from previous elections and their ultimate results. By throwing Fancy Statistics© at the problem, we can then estimate probabilities like “given Biden has a polling lead of X, what is the chance he will win?”
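To make that concrete, here is a minimal sketch of the kind of calculation involved. It’s my own illustration rather than anyone’s actual model, and the 3-point standard deviation of polling error is an assumed, purely illustrative number.

```python
import numpy as np

rng = np.random.default_rng(0)

def win_probability(polling_lead, error_sd=3.0, n_sims=100_000):
    # Assume the true margin is the polling lead plus a normally
    # distributed polling error, then count how often the leader wins.
    simulated_margins = polling_lead + rng.normal(0.0, error_sd, n_sims)
    return (simulated_margins > 0).mean()

# A 4-point lead with an assumed 3-point polling error
print(win_probability(4.0))  # roughly 0.9
```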

The big issue here is that this is not nearly as objective as people think. Modelling is equal parts maths, coding and art. A modeller must make decisions about what factors to account for and how to include them. These are often referred to as a model’s priors, the collection of “prior” ideas that go into it. These are as much an input as the polling data, historical elections or anything else. From choosing which elections to train your model on to whether to include economic data, every decision can have a massive effect on the final forecast.

The influence that a model’s creator has over its output means that different people can and do arrive at different conclusions. Take 2016: on the eve of the election FiveThirtyEight gave Clinton a 70% chance of winning, the New York Times 84% and the Princeton Election Consortium an incredible 99%. Despite having access to the exact same data, they came to very different answers.

It’s in the mess of the 2016 election that I think Nate showed why his approach is so strong. One of his priors was that polling error would be correlated across states. That is: if Trump (or Clinton) beat their polls in State A, it’s more likely they will beat them in State B. In the end, this is exactly what happened. A notable, but historically not unprecedented, polling error in Trump’s favour in a few key states was enough to carry the election. This is part of why 538 gave Trump a much better chance than the other forecasters did. Before the election Silver actually received quite a bit of flak for being too bullish on Trump’s chances.
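To see why that prior matters so much, here is a toy Monte Carlo with made-up numbers (not real 2016 polling): compare the chance of the trailing candidate sweeping three swing states when the state polling errors are independent versus strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical polling leads (in points) for the favourite in three
# decisive swing states -- illustrative numbers only.
leads = np.array([3.0, 2.5, 2.0])
error_sd = 3.0  # assumed per-state polling error

def sweep_probability(correlation, n_sims=100_000):
    # Chance the trailing candidate beats the polls in all three states,
    # given how correlated the state-level polling errors are.
    cov = np.full((3, 3), correlation * error_sd**2)
    np.fill_diagonal(cov, error_sd**2)
    errors = rng.multivariate_normal(np.zeros(3), cov, n_sims)
    return ((leads + errors) < 0).all(axis=1).mean()

print(sweep_probability(0.0))  # independent errors: a sweep is rare
print(sweep_probability(0.8))  # correlated errors: a sweep is far more likely
```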

There are far fewer big names in the election forecasting game this cycle, most having been burned by ’16. Notably, however, there has been a long-running fight on Twitter between Nate Silver and G. Elliott Morris of the Economist about whose approach to modelling is better. Whilst it has become a little academic as the outputs of the two models have converged, what has become clear is that they have not only different priors but fundamentally different philosophies about what modelling can and can’t do.

In this Twitter exchange, Silver and Morris battle it out over “backtesting” models and what that process can tell us about their predictive accuracy. Here Silver, in my opinion, grasps one of the core limitations of modelling: without putting yourself out there and publishing a prediction, you have nothing to test. You can do all the backtesting you want but, especially with such a small sample size (~15 recent elections), you actually risk making your model worse, not better.
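To illustrate the danger, here is a toy experiment, entirely synthetic and not either modeller’s actual backtest: generate fake elections from a known polling-error distribution, then “tune” a simple forecast’s assumed error by backtesting on only 15 of them at a time. The tuned value swings wildly from one 15-election sample to the next.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
TRUE_ERROR_SD = 3.0  # the polling error that, in this toy world, actually decides results

def simulate_elections(n):
    # Synthetic elections: a polling lead plus true polling error decides the winner.
    leads = rng.normal(0.0, 5.0, n)  # assumed spread of polling leads
    winners = leads + rng.normal(0.0, TRUE_ERROR_SD, n) > 0
    return leads, winners

def log_loss(error_sd, leads, winners):
    # Backtest score of a simple "probability = Phi(lead / error_sd)" forecast.
    probs = np.clip(norm.cdf(leads / error_sd), 1e-6, 1 - 1e-6)
    return -np.mean(np.where(winners, np.log(probs), np.log(1 - probs)))

candidate_sds = np.linspace(0.5, 10.0, 40)
tuned = []
for _ in range(200):  # repeat the tuning exercise on fresh 15-election samples
    leads, winners = simulate_elections(15)
    losses = [log_loss(sd, leads, winners) for sd in candidate_sds]
    tuned.append(candidate_sds[np.argmin(losses)])

print("true error sd:", TRUE_ERROR_SD)
print("backtest-tuned sd (10th/50th/90th percentile):", np.percentile(tuned, [10, 50, 90]))
```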

By embracing his own agency in model design and not trying to hide behind fundamentally limited backtesting, Nate Silver makes his model more honest. Yes, if he is wrong there is nowhere for him to hide; but he puts his decision-making process out there for us to see. So be like Nate: embrace the uncertainty. Pick the model or models you trust based on which set of priors you think is reasonable. But remember this: just because a model’s got some seemingly complex maths behind it does not mean it’s objective. By admitting that to ourselves we’ll learn much more.
