
Breaking the swingometer: historical precedents for proportional change

by Stephen Fisher and Jake Dibden, 26th June 2024

Seat projections for the Conservatives at next week's general election range from really bad to totally dire. Given recent polls, traditional uniform-change projections suggest the Conservatives will win only around 190 seats. On average, the Multilevel Regression and Poststratification (MRP) models suggest the Tories will win fewer than 100 seats.

The main difference between them is that the MRP models estimate that the Conservative vote has dropped more where the party was stronger in 2019. That is to say, the drop is broadly proportional to prior strength rather than uniform (the same) across all constituencies.
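
To make the distinction concrete, here is a minimal sketch in Python of the two projection rules. The constituency shares and national figures are invented for illustration, not real 2019 results or current polling:

```python
# Uniform vs proportional change projections, with made-up numbers.
con_2019 = [0.60, 0.45, 0.30]   # Conservative 2019 share in three seats
nat_2019 = 0.45                 # national Conservative share in 2019
nat_poll = 0.25                 # current national poll share (illustrative)

# Uniform change: every constituency moves by the same number of points.
change = nat_poll - nat_2019
uniform = [s + change for s in con_2019]

# Proportional change: every constituency keeps the same fraction of its vote.
ratio = nat_poll / nat_2019
proportional = [s * ratio for s in con_2019]

for s, u, p in zip(con_2019, uniform, proportional):
    print(f"2019: {s:.0%}  uniform: {u:.0%}  proportional: {p:.0%}")
# 2019: 60%  uniform: 40%  proportional: 33%
# 2019: 45%  uniform: 25%  proportional: 25%
# 2019: 30%  uniform: 10%  proportional: 17%
```

Under proportional change the party loses most where it was strongest, so its safest seats erode faster; under uniform change the loss in points is the same everywhere.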

Some MRP models and other forecasters have come unstuck in the past by predicting proportional change when uniform change predictions have been a better guide to seat tallies for the big parties in the post-war period (see Appendices of the Nuffield Election Studies). The efficacy of uniform change projections was so well established that they became the basis of the swingometer for election night programmes.

Since past vote choice is such a strong predictor of future vote choice, MRP models in effect have a 2019-2024 vote-transition matrix model at their heart. That in turn means MRP models tend to project proportional drops for parties in decline. The MRP modellers need a lot of data across constituencies, and careful modelling, to identify any counter-balancing pattern towards uniform change at the constituency level.
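
To see why a common transition matrix produces broadly proportional drops, consider a hypothetical two-constituency example. The retention and switching rates below are invented for illustration, not estimates from any actual MRP model:

```python
import numpy as np

# Hypothetical 2019 -> 2024 flow matrix: rows are 2019 choices, columns
# are 2024 choices. All rates are invented for illustration.
#                   Con   Lab   Other
flow = np.array([[0.55, 0.15, 0.30],   # 2019 Con voters
                 [0.05, 0.85, 0.10],   # 2019 Lab voters
                 [0.10, 0.30, 0.60]])  # 2019 Other voters

# Two constituencies with different 2019 results (Con, Lab, Other shares).
seats = {"strong Con seat": np.array([0.60, 0.25, 0.15]),
         "weak Con seat":   np.array([0.30, 0.50, 0.20])}

for name, v2019 in seats.items():
    v2024 = v2019 @ flow                 # apply the same flows everywhere
    print(f"{name}: Con {v2019[0]:.0%} -> {v2024[0]:.0%} "
          f"(drop {v2019[0] - v2024[0]:.0%})")
# strong Con seat: Con 60% -> 36% (drop 24%)
# weak Con seat: Con 30% -> 21% (drop 9%)
```

The same flows applied everywhere take far more off the Conservatives where they started stronger, which is broadly proportional change; any countervailing uniform tendency can only be detected with constituency-level information.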

Are we likely to see predictions of proportional change come unstuck again this year? 


Why are polls from different pollsters so different?

by Stephen Fisher and Dan Snow.

On average, the polls have shown a fairly consistent and comfortable Conservative lead throughout this general election campaign. However, around that average there are substantial differences between polls. Some suggest the Conservatives might fail to win a lead big enough to secure a majority, while others point to a Tory landslide with a majority of over a hundred. What’s going on?

In short, since this is a long and complicated blog, our tentative conclusion is that the big systematic differences between pollsters are due primarily to systematic differences in the kinds of people they have in their samples, even after weighting. Some of the sample profile differences translate straightforwardly into headline differences. For instance, having more 2017 Conservatives in a sample means there will be more 2019 Conservatives. In other areas the findings are more puzzling. Polls vary in the extent to which women are more undecided than men and in the extent to which young adults are less certain to vote, but neither source of variation has the effect on headline figures that we would expect.

Nonetheless, for most of the aspects of the poll sample profiles we have inspected, it is remarkable how much polls differ between pollsters, with relatively little change over time for each pollster. This suggests that the way different pollsters have been going about collecting responses has yielded systematically different kinds of respondents. With a couple of exceptions, it seems to have been the process of data collection, rather than post-hoc weighting and adjustment, that has driven pollster differences in this campaign.
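
The past-vote mechanism is easy to illustrate with a toy calculation. Below, identical vote-flow rates are applied to two samples that differ only in their 2017 past-vote mix; all rates and compositions are invented:

```python
# Toy calculation: the same flows, two different sample compositions.
retain_con, retain_lab = 0.85, 0.80   # stay with their 2017 party
lab_to_con, con_to_lab = 0.10, 0.05   # switch between the two parties

def headline_lead(pct_2017_con, pct_2017_lab):
    con = pct_2017_con * retain_con + pct_2017_lab * lab_to_con
    lab = pct_2017_lab * retain_lab + pct_2017_con * con_to_lab
    return con - lab

# Pollster A's sample contains more 2017 Conservatives than pollster B's.
print(headline_lead(45, 40))   # A: lead of ~8 points
print(headline_lead(40, 45))   # B: lead of ~0.5 points
```

A five-point difference in sample composition becomes a seven-and-a-half-point difference in the headline lead, without any difference in how the two pollsters weight or adjust their data.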

As the graph below shows, a large part of the variation between polls is between pollsters. The pollsters have shown a similar pattern of change in the Conservative-Labour lead over time, most with a peak in mid-November and a slight decline since. The headline Conservative-Labour lead – the basis for the swingometer – is the main guide to seat outcomes. So an important question is why pollsters differ systematically in the size of their published Conservative leads.

[Figure: Conservative-Labour lead over time, by pollster]
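
One rough way to separate these pollster "house effects" from common movement over time is to subtract the average lead across pollsters within each week and then average what remains by pollster. A sketch with invented poll numbers, not the real published leads:

```python
import pandas as pd

# Invented Conservative leads for two pollsters over three weeks.
polls = pd.DataFrame({
    "pollster": ["A", "B", "A", "B", "A", "B"],
    "week":     [1,   1,   2,   2,   3,   3],
    "con_lead": [10,  14,  11,  15,  9,   13],
})

# Common trend: the average lead across pollsters within each week.
weekly_mean = polls.groupby("week")["con_lead"].transform("mean")

# House effect: how far each pollster sits from that trend on average.
polls["residual"] = polls["con_lead"] - weekly_mean
print(polls.groupby("pollster")["residual"].mean())
# pollster
# A   -2.0
# B    2.0
```

If most of the variation between polls is captured by stable house effects plus a common trend, that is exactly the pattern in the graph above: pollsters move together over time but sit at systematically different levels.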

In this blog post we use data from the standard tables that pollsters must publish as part of the requirements of British Polling Council membership. These tables contain a wealth of information about the profiles of the different survey samples, both before and after weighting and adjustment. We collected data from such tables for all polls from the 30th of October (when parliament voted for an early general election) to the 4th of December (just over a week before the end of the campaign). There have been more polls since then, but so far as we can tell they do not substantially change the issues we raise here.
