We do not endorse any of the component forecasts. These are simply all the usable forecasts that we know about: we have not excluded any based on our judgement of quality. Please do let us know if there are any other published forecasts that you think we may have missed and could be using. More generally, the methodology is still under development, so comments are very welcome.
Citizen forecasts come from the results of representative surveys of voters, asking them what they think the outcome will be. Such voter expectation surveys have an excellent track record, arguably better than polls, prediction markets, quantitative models or expert judgment for US presidential elections. The percentage who think that Remain will win is taken as a collective estimate for the probability of a Remain win. Results of voter expectation surveys are listed here. For each pollster we take the average of the last two such polls within the last three months, and then average across pollsters.
As of the 8th June 2016, this component of the forecast now allocates 50% of respondents who say they don’t know which side will win to the Remain percentage, and the other 50% to the Leave percentage. The probability of a Remain win is then calculated as the percentage of all respondents in the poll who think Remain will win, rather than a percentage of the respondents who offered an opinion as in previous forecasts.
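The steps above can be sketched as follows. The poll figures here are hypothetical, made up purely to illustrate the calculation: half of the don't-knows are allocated to each side, the Remain share of all respondents is taken as that poll's probability of a Remain win, and we average over the last two polls per pollster before averaging across pollsters.

```python
from statistics import mean

# Hypothetical voter-expectation polls per pollster (most recent first):
# percent of respondents expecting Remain, expecting Leave, or saying
# they don't know. These are illustrative numbers, not real results.
polls = {
    "Pollster A": [{"remain": 48, "leave": 37, "dk": 15},
                   {"remain": 46, "leave": 40, "dk": 14}],
    "Pollster B": [{"remain": 52, "leave": 38, "dk": 10}],
}

def remain_win_prob(poll):
    # Allocate 50% of the don't-knows to the Remain percentage, then take
    # the Remain share of *all* respondents as the probability of a Remain win.
    return (poll["remain"] + 0.5 * poll["dk"]) / 100

# Average over (up to) the last two polls per pollster, then across pollsters.
per_pollster = [mean(remain_win_prob(p) for p in ps[:2]) for ps in polls.values()]
citizen_forecast = mean(per_pollster)
```

The three-month recency filter is omitted here for brevity; in practice only polls within the window would enter the per-pollster lists.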
One poll, from ORB, asked respondents what proportion of people they think will vote Remain and what proportion Leave. Responses are then averaged to give a collective prediction of the vote shares. This is included as a citizen forecast within our combined forecast for the share of the vote going to each side.
This category includes predictions of the vote shares given by the Times Red Box sweepstake podcast contributors (in addition to a few others who have offered predictions through the sweepstake who we have identified as experts). It also includes, from the 8th June, predictions from the Political Studies Association Expert Survey, available here. This was sent to a number of academics, journalists, and pollsters. For the vote shares, we take the average prediction from the Red Box sweepstake and the PSA Expert Survey, weighted by their respective sample sizes.
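As a minimal sketch of the sample-size weighting, with entirely hypothetical mean predictions and sample sizes (the real survey figures are not reproduced here):

```python
# Hypothetical mean predicted Remain vote share and number of respondents
# for each expert source; illustrative numbers only.
sources = [
    {"name": "Red Box sweepstake", "mean_remain": 53.0, "n": 20},
    {"name": "PSA Expert Survey", "mean_remain": 54.5, "n": 87},
]

# Weight each source's mean prediction by its sample size.
total_n = sum(s["n"] for s in sources)
expert_share = sum(s["mean_remain"] * s["n"] for s in sources) / total_n
```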
For the expert forecast of the probability of a Remain win we use only the PSA Expert Survey, which explicitly asked the respondents to assign a probability to Leave winning a majority in the referendum.
There are various other published expert forecasts but they are not used here because they do not provide figures for either the share of the vote or the probability of one side winning. They too overwhelmingly point to Remain winning.
Philip Tetlock’s Good Judgment Project encourages people to forecast the outcomes of various social and political events and helps them learn and improve their forecasting skills. Tetlock claims that, given some training, effort and practice, reasonably intelligent citizens can forecast better than experts, at least collectively. One of their forecasting challenges is the outcome of the Brexit referendum (here).
In addition we use the volunteered contributions to the Times Red Box sweepstake, i.e. the participants excluding the podcast contributors. For some academics and others that we know of we have reclassified their forecasts as expert. The proportion of these volunteered contributions giving predicted Remain shares above 50% is again taken to be the probability of a Remain vote. The median Remain vote share is used for the share forecast.
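The two summaries taken from the volunteered contributions can be sketched as follows, using hypothetical predicted Remain shares:

```python
from statistics import median

# Hypothetical predicted Remain vote shares (%) from sweepstake volunteers;
# illustrative numbers only.
shares = [48.0, 51.5, 53.0, 55.0, 46.5, 52.0]

# Proportion of contributions predicting a Remain share above 50%
# is taken as the probability of a Remain win.
prob_remain = sum(s > 50 for s in shares) / len(shares)

# The median predicted share is used as the vote-share forecast.
share_forecast = median(shares)
```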
These are websites which allow people to bet on the outcome directly with other participants, without a bookmaker setting the odds (explanation here). They are much lauded as a forecasting tool by many economists and business people because they draw on views from a wide range of people willing to risk their own money. For election forecasting they arguably have a better track record than polls, quantitative models and expert judgement.
For the Brexit referendum the prediction market websites only have markets for which side will win, not for the share of the vote. We use data from PredictIt and Hypermind. We also use spread-betting markets from Sporting Index and IG, taking the mid-point of the spread as the predicted probability or vote share.
Because of low trading rates we do not use ipredict.
These are traditional bookmakers. Even though the odds are formally set by the bookmakers, with enough people betting they are primarily driven by what the punters are willing to accept. We average across major bookmakers listed here after correcting for the over-round (whereby the sum of the implied probabilities from the published odds is more than 100%).
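The over-round correction amounts to rescaling the implied probabilities so that they sum to one. A minimal sketch, with hypothetical decimal odds:

```python
# Hypothetical decimal odds from one bookmaker; illustrative numbers only.
odds = {"Remain": 1.33, "Leave": 3.5}

# Implied probability is the reciprocal of the decimal odds.
implied = {k: 1 / v for k, v in odds.items()}

# The implied probabilities sum to more than 1 (the over-round, i.e.
# the bookmaker's margin); rescale so they sum to exactly 1.
overround = sum(implied.values())
corrected = {k: p / overround for k, p in implied.items()}
```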
Bookies allow people to bet on the share of the vote within particular bands (e.g. 45-50% Remain). To generate a combined vote share forecast we take the mid-points of the bands and weight them by the (corrected) implied probabilities. For large bands that extend to 0 or 100 we do not use the mid-points but figures five points from the interior bound. For example, if the band is 75% to 100%, we use 80% for the share calculation. Implied probabilities for these extreme bands are very small so the choice of mid-point makes little difference to the calculations.
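The band calculation can be sketched as follows, using hypothetical (already over-round-corrected) implied probabilities for Remain vote-share bands:

```python
# Hypothetical corrected implied probabilities for Remain vote-share bands
# ((lower %, upper %), probability); illustrative numbers only.
bands = [((0, 45), 0.05), ((45, 50), 0.30), ((50, 55), 0.45),
         ((55, 60), 0.15), ((60, 100), 0.05)]

def band_point(lo, hi):
    # Use the band mid-point, except for bands extending to 0 or 100,
    # where we use a point five percentage points inside the interior bound
    # (e.g. 65 for the 60-100 band).
    if lo == 0:
        return hi - 5
    if hi == 100:
        return lo + 5
    return (lo + hi) / 2

# Probability-weighted average of the band points gives the share forecast.
share = sum(band_point(lo, hi) * p for (lo, hi), p in bands)
```

As the text notes, the extreme bands carry very little probability, so the choice of point for them makes little difference to the result.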
For the share of the vote we use the average of the polling averages that are published by whatukthinks, Ben Stanley, Number Cruncher Politics, the FT, and ElectionsEtc. These polling averages typically aim to correct for differences between pollsters and so they should not fluctuate too much according to whether the most recent polls were online or telephone or from a particular company. So what we are calculating is a poll of polls of polls!
Since polling averages do not reflect the range of variance in the polls very well, we generate a pseudo-probability of Remain winning from the proportion of polls that have Remain ahead. For this we take just the last two polls from each company-method combination within the last two months. So if a company has published two online polls and two phone polls in the last two months, these are treated separately. Here we make no attempt to balance between online and telephone polls, even though there are slightly fewer companies doing phone polls and they point more clearly towards Remain.
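A minimal sketch of the pseudo-probability calculation, on hypothetical poll leads. The source does not say how tied polls are treated; here a level poll is counted as Remain not being ahead, which is an assumption of this sketch.

```python
# Hypothetical Remain leads (Remain % minus Leave %) for the last two polls
# per company-method combination; illustrative numbers only.
leads = {
    ("Company A", "phone"):  [4, 2],
    ("Company A", "online"): [-1, 1],
    ("Company B", "online"): [-2, 0],
    ("Company C", "phone"):  [3, 5],
}

# Pool all qualifying polls and take the proportion with Remain ahead.
# NOTE: a tie (lead == 0) counts as Remain not ahead (assumption).
all_polls = [lead for poll_leads in leads.values() for lead in poll_leads]
pseudo_prob = sum(lead > 0 for lead in all_polls) / len(all_polls)
```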
Poll-based forecasting models
Polls are a snapshot of opinion at the time they are taken. The historical relationship between polls and referendum outcomes tells us something about the direction and extent of any likely change in opinion, as well as the level of uncertainty we can expect in the outcome. Both ElectionsEtc.com and Number Cruncher Politics provide forecasts of this kind for both the probability of Remain winning and the vote shares.
Non-poll-based models
We have identified a number of miscellaneous other forecasting models for the EU referendum, all of which happen to only provide forecasts of the vote shares for Remain and Leave rather than a probability of Remain.
A traditional approach to election forecasting is to use historical data to develop a statistical model based on factors that are expected to influence the vote. These factors are often referred to as the “fundamentals”. See here for an introduction to these kinds of models for US elections. There is only one example of this approach that we know of and it is from Matt Qvortrup (see here and here, with the latest estimate here). His model is based on GDP, inflation, and the length of time the government has been in office.
A further set of models were included from the 8th June onwards. Those from Euro Correspondent and UK General Election 2020 combine information on the vote share gained by the various parties in past elections with estimates of the proportion of each party’s supporters likely to vote Remain. Two such models use vote shares from the 2015 General Election (here and here) and one uses the 2014 European Parliament elections (here). Finally, we also include a model from dataiq.co.uk based on big data and social media, see here.