
Polls apart

Leading the inquiry into the 2015 general election polls.

Published:
13 April 2016

Polling is a multi-million-pound industry, and the most visible way in which the public encounters survey research. From a democratic perspective, polls enable people to vote strategically, which is particularly valuable in the UK’s ‘first past the post’ system. Southampton researchers played a key role in investigating the causes of the polling errors around last year’s general election.

The polls on the eve of the 2015 general election all pointed to a dead heat, yet the following day, the Conservatives beat Labour by a seven-point margin. The day after the election, the British Polling Council and Market Research Society initiated an inquiry to investigate the extent of this polling error, and they asked Southampton’s Professor Patrick Sturgis to chair it.

Patrick, Professor of Research Methodology at the University and Director of the National Centre for Research Methods, explains why the University of Southampton was chosen for this role. “Here at Southampton we are one of the leading centres for survey statistics in the UK and internationally, with a long track record of research and training in statistical methodology for surveys.

“We have also established strong links with the Office for National Statistics over many years, as well as data collection agencies in the commercial sector such as Ipsos MORI and YouGov, which has enabled us to apply our research to real-world settings such as the general election polling misses,” he adds.

Analysing polling misses

Patrick and his team analysed the polling data and identified potential causes for the errors. With election polls, there is a finite number of things that can go wrong: for example, one possibility is ‘late swing’ – voters changing their minds at the last minute; another is ‘differential turnout misreporting’ – when supporters of one party disproportionately tell pollsters they will vote but then fail to do so on election day. Even the way in which the questions were worded could affect the results. “We have looked into each of these potential causes in turn and found little or no evidence to support them being the main cause of the error,” says Patrick. “This left one potential big cause standing, which we have called ‘unrepresentative samples’. This means that the way the pollsters gathered and adjusted their sample data systematically over-represented Labour voters and under-represented Conservatives.”
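To see how an unrepresentative sample produces exactly this kind of systematic error, consider the toy simulation below. It is purely illustrative – the party shares and response rates are invented and it is not the inquiry’s analysis – but it shows how making one party’s supporters slightly more willing to take part can erase a genuine six-point lead:

```python
import random

random.seed(42)

# Hypothetical electorate: 37% Conservative, 31% Labour, 32% other
# (roughly the 2015 national result, used here only for illustration).
electorate = ["con"] * 37 + ["lab"] * 31 + ["oth"] * 32

def biased_poll(sample_size, lab_boost=1.3):
    """Simulate a poll in which Labour supporters are `lab_boost`
    times as likely as everyone else to end up in the sample."""
    sample = []
    while len(sample) < sample_size:
        voter = random.choice(electorate)
        accept = 1.0 if voter == "lab" else 1.0 / lab_boost
        if random.random() < accept:          # rejection sampling
            sample.append(voter)
    con = sample.count("con") / sample_size
    lab = sample.count("lab") / sample_size
    return 100 * (con - lab)

# A genuine +6-point Conservative lead typically comes out flat or
# even negative once the sample is tilted toward Labour.
print(f"polled Conservative lead: {biased_poll(2000):+.1f} points")
```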

Polling agencies in the UK gather their data using a procedure called ‘quota sampling’ to make their sample represent the voting population. Respondents complete questionnaires, either online or over the phone, and the pollsters then weight the sample to match the national distribution of adults by age, gender and region. “We have identified this as a weakness in their methodology: it is quite a strong assumption that you will be able to adjust for all relevant variables when you do the weighting in this way,” says Patrick. An alternative approach would be random sampling, but that is too costly and time-consuming for election polling.
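A minimal sketch of that weighting step, using hypothetical age-by-gender targets rather than any pollster’s real figures: each cell of the sample is scaled so that its share matches the population share. The weakness Patrick identifies is visible in the logic itself – the weights can only correct for the variables in the table, so if Conservative voters are under-sampled within every cell, no amount of cell weighting can recover them.

```python
from collections import Counter

# Hypothetical population targets for age-by-gender cells
# (proportions summing to 1; not any pollster's real figures).
population = {
    ("18-34", "f"): 0.14, ("18-34", "m"): 0.14,
    ("35-54", "f"): 0.18, ("35-54", "m"): 0.17,
    ("55+",   "f"): 0.20, ("55+",   "m"): 0.17,
}

# Toy sample that over-represents younger respondents.
sample = ([("18-34", "f")] * 300 + [("18-34", "m")] * 250 +
          [("35-54", "f")] * 180 + [("35-54", "m")] * 120 +
          [("55+",   "f")] * 90  + [("55+",   "m")] * 60)

counts = Counter(sample)
n = len(sample)

# Cell weight = population share / sample share: scarce groups
# (here, older respondents) are weighted up.
weights = {cell: population[cell] / (counts[cell] / n) for cell in counts}

for cell, w in sorted(weights.items()):
    print(cell, round(w, 2))
```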

The fact that all the polls were wrong in the same way – underestimating the Conservative lead over Labour – could have something to do with a phenomenon known as ‘herding’, Patrick explains. This is where the polls tend to give more similar results than would be expected ‘by chance’. It can arise from a number of different factors, such as when pollsters make adjustments to their raw data in ways that tend to pull them toward a group consensus.
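One rough diagnostic for herding, sketched below with invented figures (the inquiry’s own checks were more elaborate), is to compare the observed spread of the final polls with the spread that pure sampling error would produce; an observed spread far below the expected one suggests the results agree with each other more than independent random samples should.

```python
import math
import statistics

# Hypothetical final-week Conservative shares and sample sizes
# from six pollsters (invented figures).
polls = [(0.34, 1500), (0.33, 2000), (0.34, 1100),
         (0.33, 1800), (0.34, 1600), (0.33, 1200)]

observed_sd = statistics.stdev(p for p, _ in polls)

# Spread expected from pure random sampling alone.
expected_sd = statistics.mean(
    math.sqrt(p * (1 - p) / n) for p, n in polls)

# observed_sd well below expected_sd is a signature of herding.
print(f"observed SD: {observed_sd:.4f}, expected SD: {expected_sd:.4f}")
```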


Political dimension

Will Jennings, Professor of Political Science and Public Policy and Director of the Centre for Citizenship, Globalisation and Governance at Southampton, was also part of the expert panel on the inquiry. Will is a political scientist who has studied the relationship between polls and final election outcomes, in the UK and in many other countries. During the 2015 general election, Will and his team produced a prominent forecasting model, featured on the New Statesman’s website, which combined historical and current poll data into a probabilistic forecast of the swing from the last election, and therefore of who would win each constituency.
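The team’s actual model is not reproduced here, but a stripped-down sketch of the general idea, with entirely hypothetical numbers, is to apply a polled national swing, plus historical uncertainty about polling error, uniformly to each constituency’s previous result and simulate:

```python
import random

random.seed(1)

# Previous-election Conservative leads over Labour, in points,
# for five hypothetical constituencies.
previous_leads = [12.0, 4.0, -2.0, -9.0, 1.5]

polled_swing = -1.0   # national swing to Labour implied by the polls
swing_error_sd = 2.5  # historical uncertainty around polled swings

def seat_probabilities(n_sims=10_000):
    """Monte Carlo under uniform national swing: for each seat,
    estimate the probability that the Conservatives win it."""
    wins = [0] * len(previous_leads)
    for _ in range(n_sims):
        swing = random.gauss(polled_swing, swing_error_sd)
        for i, lead in enumerate(previous_leads):
            if lead + swing > 0:
                wins[i] += 1
    return [w / n_sims for w in wins]

for lead, p in zip(previous_leads, seat_probabilities()):
    print(f"previous lead {lead:+5.1f} -> P(Con hold/gain) = {p:.2f}")
```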

“If you look over time at how opinion polls have performed since the 1940s, they were actually more predictive historically than they are now. This arguably reflects the nature of modern electorates, which is that voters are more volatile. They’re less aligned to social and economic groups, which traditionally meant that if you were from a particular background you were more likely to vote in a particular way,” Will explains. “This made pollsters’ jobs much easier.”

Today, there are many more polls carried out because of increased competitiveness in the polling industry and the reduced costs of internet polling. “Nearly 2,000 polls were carried out in the last election cycle alone, whereas from 1945 to 2010, there were only around 4,000 polls; so around a third of all polls ever conducted were carried out in the last five years. There might be an argument around pollsters having to think about new innovations to improve their methods, or returning to conducting fewer randomly sampled face-to-face surveys, which are more expensive but more reliable.”

Looking at historical polling data is also useful for forecasting because political parties tend to have underlying levels of support. “If the polls tell you a party is below its historical equilibrium, there’s a tendency for the party to gain in support in the final weeks; conversely, if it’s significantly above its historical equilibrium, there’s a tendency for it to drop,” says Will. “But this historical pattern didn’t materialise in 2015, and the polls didn’t move much during the campaign.”
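That tendency can be expressed as a simple mean-reverting adjustment. The sketch below is a generic illustration with invented numbers and an arbitrary reversion weight, not the forecasting team’s specification:

```python
def reversion_forecast(current_poll, equilibrium, pull=0.3):
    """Move a party's current poll share part of the way back
    toward its long-run equilibrium; `pull` sets the strength."""
    return current_poll + pull * (equilibrium - current_poll)

# A party polling at 29% against a 33% historical equilibrium
# would be forecast to recover some ground by polling day.
print(reversion_forecast(29.0, 33.0))  # 30.2
```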

Historical opinion poll results also help researchers understand systematic biases; a fundamental problem with the polls leading up to the 2015 general election was that the polled population was slightly different from the voting population.

“This has historically been one of the biggest challenges for the polling industry: the sorts of people who are responding to phone and internet polls are not representative of the general population. And you really see this amplified if you look at particular subsets of the poll data,” says Will. “The error was slightly bigger in 2015, and also it mattered a lot more because the polls predicted such a close finish, whereas in 1997 for example, where there were also polling errors, it didn’t matter so much in practice because Labour won by such a large margin.”

But how much do the polls themselves influence the election results? “This is very hotly disputed; some people have argued, for example, that the poll results influenced the outcome of the 2015 general election because the expectation of a hung parliament changed the content of the whole campaign, making it about whether the SNP would be ‘holding a minority Labour government to ransom’,” says Will. “However, there is a problem from a research point of view about how to prove this.”

Steps to more reliable polls?

So can the polls be made more reliable? “The polls are a useful tool, but we need to be more aware of the uncertainty in the estimates they produce. People tend to endow polls with more accuracy than they are capable of providing. These types of estimates are subject to various kinds of errors and looking at the historical record, they tend to be wrong on a not infrequent basis,” says Patrick.
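Patrick’s point about over-stated accuracy can be made concrete with the textbook margin-of-error calculation, which captures only random sampling error and therefore understates a poll’s true uncertainty:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion under simple random
    sampling -- a lower bound on a poll's real uncertainty."""
    return z * math.sqrt(p * (1 - p) / n)

# A party on 34% in a 1,000-respondent poll:
print(f"+/- {100 * margin_of_error(0.34, 1000):.1f} points")  # ~2.9
```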

There were inquiries into polling failures after the 1970 and 1992 general elections in the UK, there have been similar inquiries in the US, and one is under way in Poland at the moment. Polling misses are therefore not unique to the UK political system or polling industry; they occur across the world.

The inquiry panel made 12 recommendations to the polling industry in their Report of the Inquiry into the 2015 British General Election opinion polls. The inquiry recommends a number of changes to British Polling Council (BPC) rules, such as requiring members to pre-register ‘vote intention’ polls before conducting fieldwork and providing statistical significance tests for changes in vote shares between polls. The report also makes recommendations for improving the representativeness of polling samples.
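For independent samples, the recommended significance test for a change in a party’s vote share could take the form of a standard two-proportion z-test; the sketch below is a generic illustration, not the BPC’s prescribed procedure:

```python
import math

def share_change_z(p1, n1, p2, n2):
    """Two-proportion z-statistic for a change in one party's
    vote share between two independent polls."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# A one-point 'shift' between two 1,000-person polls:
print(f"z = {share_change_z(0.33, 1000, 0.34, 1000):.2f}")  # ~0.47
```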

“Opinion polls are subject to many different sources of error and no set of procedures will ever guarantee total accuracy. But how polls are conducted and reported in the UK can and should be improved,” says Patrick. “We hope the recommendations set out in this report will be taken on board by the polling industry in order to reduce the risk of these kinds of polling errors being repeated in future elections.”


Impact on the EU referendum polls?

With the EU referendum fast approaching, how susceptible will these polls be to the errors encountered in last year’s UK general election? Because referendums are far less frequent than general elections, it’s difficult to make historical comparisons. “One thing we do know,” says Will, “is that there tends to be ‘status quo bias’ in referendums: often the polls understate the status quo option because people are generally risk averse.”

Will and his team have also been observing the early polling for the EU referendum. “Interestingly, there are much bigger methodological differences than there were in the general election, and so the telephone and internet polls are telling very different stories,” he adds. “The telephone polls are giving the story that remaining in the EU is the most likely option by a substantial distance, whereas the internet polls have been showing potentially some lead for leaving the EU. This hints at the fact that these samples are very different people. We don't know who’s going to be right until election day, and perhaps the samples will converge as we get towards the election.”

