
‘Where Did It All Go So Wrong?’

In addition to the Labour party and the Liberal Democrats, the clear loser from the 2015 general election seems to be the opinion polling industry. Throughout 2015 the polls had apparently been pointing to a very close general election, with a hung parliament a near-certainty. Yet, in the end, the Conservatives finished almost seven percentage points ahead of Labour in the popular vote, and managed to secure an overall parliamentary majority. Something seems to have gone rather wrong.

The British Polling Council, a body to which all of the main polling companies belong, has already announced an enquiry. But before we can enquire into why things went wrong, we have to be clear about the nature of the error. In Scotland, the polls got it absolutely right in predicting an SNP landslide, even if they did not get the percentage vote-shares won by each of the parties exactly correct.

So what about Wales? Here we had only one regular polling company publishing in the build-up to the election: YouGov. (ICM have done some Welsh polls over recent years, but regrettably none during the election campaign itself.) We are therefore really evaluating the record of one poll – the Welsh Political Barometer, on which I have worked with ITV-Wales and YouGov for the last eighteen months. So how well did we do?

The final Barometer poll was released last Wednesday. Sampling by YouGov was done on the last three days of the election campaign, to ensure the data was gathered as late as possible before voting began. The poll asked two voting intention questions. The first was a standard, general question:

“The general election is this Thursday, 7th May, which party will you vote for?”

This was then followed up with a second question:

“Thinking specifically about your own constituency and the candidates who are likely to stand there, which party’s candidate do you think you will vote for in your own constituency this Thursday?”

Results from these two questions were then weighted by respondents’ answers to a third question, on likelihood to vote (“The general election will be held this week. On a scale of 0 (certain NOT to vote) to 10 (absolutely certain to vote), how likely are you to vote in the general election?”).
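
To make that concrete, here is a minimal Python sketch of one simple way such turnout weighting can be applied. The proportional 0–10 weighting scheme shown is an assumption for illustration only; YouGov's actual weighting procedure is more elaborate and is not described above.

```python
# A minimal sketch of likelihood-to-vote weighting. Assumption: each
# response counts in proportion to the respondent's stated 0-10
# likelihood of voting. YouGov's real procedure will differ.
from collections import defaultdict

def turnout_weighted_shares(responses):
    """responses: list of (party, likelihood) tuples, likelihood in 0-10."""
    weighted = defaultdict(float)
    for party, likelihood in responses:
        weighted[party] += likelihood / 10.0  # certain voters count fully
    total = sum(weighted.values())
    return {party: 100.0 * w / total for party, w in weighted.items()}

# Tiny illustrative sample (hypothetical data):
sample = [("Labour", 10), ("Conservative", 8), ("Labour", 3),
          ("Plaid Cymru", 10), ("UKIP", 9)]
print(turnout_weighted_shares(sample))
```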

The table below gives three sets of figures. The first column of numbers is the actual election result, to one decimal place, for each of the main parties. The next column gives the results produced by the Barometer poll for the ‘generic’ voting intention question. The final column gives the Barometer results for the ‘constituency-specific’ voting intention question. Finally, at the bottom of those two columns is a measure of how ‘wrong’ each question was: the mean error. This is calculated by taking, for each of the six main parties, the absolute gap between the election result and the poll’s vote-share figure, and then averaging those gaps (a short worked sketch of this calculation follows the table).


Party               Result   ‘Generic’ %   ‘Constituency’ %
Labour              36.9     38            39
Conservative        27.2     26            25
UKIP                13.6     13            12
Plaid Cymru         12.1     12            13
Liberal Democrats   6.5      7             8
Greens              2.6      4             2
MEAN ERROR          –        0.82          1.48

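For concreteness, the mean error figures in the table can be reproduced in a few lines of Python. The calculation is simply the average, across the six parties, of the absolute gaps between each poll share and the actual result; the vote shares below are copied from the table above.

```python
# Mean absolute error between the election result and each poll question,
# using the vote shares from the table above.
result       = {"Labour": 36.9, "Conservative": 27.2, "UKIP": 13.6,
                "Plaid Cymru": 12.1, "Liberal Democrats": 6.5, "Greens": 2.6}
generic      = {"Labour": 38, "Conservative": 26, "UKIP": 13,
                "Plaid Cymru": 12, "Liberal Democrats": 7, "Greens": 4}
constituency = {"Labour": 39, "Conservative": 25, "UKIP": 12,
                "Plaid Cymru": 13, "Liberal Democrats": 8, "Greens": 2}

def mean_error(poll, actual):
    """Average absolute gap, in percentage points, across the parties."""
    return sum(abs(poll[p] - actual[p]) for p in actual) / len(actual)

print(round(mean_error(generic, result), 2))       # 0.82
print(round(mean_error(constituency, result), 2))  # 1.48
```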

Bearing in mind that such polls have a ‘margin of error’ of approximately 3% either way, we can see that on both questions the poll performed pretty well. Every one of the six parties’ final vote share was estimated within that 3% margin on both questions. However, the generic question clearly performed better for every party except the Greens, and got all of the six parties within 1.4% of the final outcome. The average error under this question was less than a single percentage point. This is a very strong performance.
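
For readers wondering where that rough 3% figure comes from: it is the conventional 95% margin of error for a simple random sample of around 1,000 respondents, a typical size for polls of this kind. Treat the sketch below as a rule of thumb rather than an exact property of this poll – the sample size of 1,000 is an assumption for illustration, and online panel samples are not simple random samples.

```python
# Back-of-the-envelope 95% margin of error for a simple random sample.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error, in percentage points, for share p and sample size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(0.5, 1000), 1))   # ~3.1 points at a 50% share
print(round(margin_of_error(0.12, 1000), 1))  # ~2.0 points at a 12% share
```

Note that the margin narrows for smaller vote shares, so the conventional ‘3% either way’ is, if anything, a generous allowance for the smaller parties.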

Rather unfortunately, however, those generic figures were not the ones that YouGov reported as their final ‘headline’ numbers when we published the poll last Wednesday afternoon. Instead, the figures that were highlighted were the constituency-specific ones. Why? In the (typically honest) words of Laurence Janta-Lipinski, Associate Director for Political and Social Research at YouGov, who has worked with us for several years on Welsh polling:

“[I]n terms of why we used the constituency questions, we made a very poor judgement call. There was patchy evidence in the last election cycle which suggested the constituency question would be more accurate; in 2009, the main voting intention showed the Lib Dems losing all their seats in their heartlands where the constituency question suggested they would hold on to them, which in fact they did in 2010. In 2015, we were seeing a similar pattern and therefore decided that the best course of action was to use constituency VI again this time. In reality, the reverse appeared to be true and…we got it wrong. It’s not just Wales, the standard question would have been better across the board.”


It was clearly unfortunate that we did not use the generic numbers, which really were very accurate – although they still slightly under-stated the Conservative position and over-stated that of Labour.

This experience raises a broader question about the conduct of polls that asked both generic and constituency-specific questions. Such was the approach of Lord Ashcroft’s constituency polls, for example, which tended to give particular prominence to the constituency-specific numbers. In the end, however, for most of the constituencies polled by Lord Ashcroft, as with our Barometer poll, the numbers produced by the generic question came rather closer to the final election outcome than those from the question that prompted people to think about their specific constituency. I’ll be reflecting on the experience of the Ashcroft constituency polls in Wales in a later blog post.

The main message, though, is that I think we need to refine slightly our understanding of where the polls went wrong in Britain in 2015. They did not go far wrong at all in Scotland. Nor were they very wide of the mark in Wales. It was in England that the polls had much more serious problems. I look forward eagerly to the findings of the British Polling Council enquiry into quite what those problems were.

Comments

  • James

    Very interesting point re generic VI vs constituency VI.

    There was a similar situation in Scotland. Survation’s ‘eve of poll’ survey almost got the result spot on with generic VI, but its ‘ballot paper’ VI was badly off for the SNP and Labour. Oddly, though, its shares for Con and Lib Dem were more accurate, perhaps because it picked up a Tory -> Lib Dem (anti-SNP) tactical shift in the seats the Lib Dems were (unsuccessfully) defending.

    Actual VI: SNP 50.0, Lab 24.3, Con 14.9, Lib 7.5

    Standard VI: SNP 48.9, Lab 24.8, Con 15.5, Lib 5.9

    Ballot paper VI: SNP 45.9, Lab 25.8, Con 15.0, Lib 7.1

    http://survation.com/wp-content/uploads/2015/05/Final-Daily-Record-May-Tables-1c6d6h0.pdf

    • Roger Scully

      Thanks, James. As I said, the use of constituency-specific questions is something that will need to be looked at more generally. Sometimes they do seem to work better, but in this election they mostly didn’t. We need to try to understand why.

  • J.Jones

    Remember that the Plaid-sponsored YouGov poll had an even more striking error in its constituency-specific question: Plaid apparently supported by 15%, against 13% on the generic question and 12.1% in the actual result.

    • Roger Scully

      Indeed, Jon – though this poll happened some days before the election, so the difference might conceivably have been down to a late swing. However, the fact that even the final poll had this question over-stating Plaid support a bit suggests that it was not the best measure of party support.
