One welcome development in Welsh political life and electoral analysis over the last year has been the growth in regular political polling. We now have more regularly reported measures of party preferences and public attitudes. In discussing the results of those polls, one topic that has cropped up quite frequently, both in comments on blog posts here and elsewhere, has been how polling numbers (which report intended vote shares) are translated into possible election outcomes in terms of seats in parliament or the National Assembly. This is a topic that easily gives rise to confusion, and one I’d like to discuss here.
A first observation is that current polls should never be seen as a prediction of the next general or Assembly election. For one thing, as Sir Robert Worcester of MORI has often said, “Polls don’t predict; although pollsters sometimes do”. A poll asking about voting intention is a measure of party support now; it is absolutely not a prediction of what it might be at some time in the future. Of course it is true, as Nate Silver has observed, that the closer to the election you get, the more confidently you ought to be able to predict the final election vote shares from current polls. And polls conducted immediately pre-election should be able to get pretty close to the final outcome: if they don’t, something is probably awry with a pollster’s methods.
What I’d like to spend most of this post looking at, though, is how vote shares from polls are generally translated into potential outcomes in terms of seats: ‘what would happen if these findings were repeated across Wales in an election?’ These seat totals are also not predictions of what will happen, but rather projections of the current position as revealed to us by the polls. But how are such seat numbers generated? And how seriously should we take those numbers?
The method I use for all such projections reported here is that of Uniform National Swing (UNS). This method is also used by many others, such as the UK Polling Report site, and by the BBC. An obvious virtue of UNS is simplicity. To apply it you just compare the percentage support for each party in a given poll with the percentage support they received in the most recent relevant election. The percentage-point change (or ‘swing’) in support from the last election, whether positive or negative, is then applied uniformly to every constituency (and electoral region for National Assembly elections) in Wales. Repeat the process for every party, and see which party comes out on top in each constituency. Once this is completed, you have a full set of projected results for the whole of Wales. It really is as simple as that.
To illustrate, let’s use the recent BBC/ICM poll. This put Plaid Cymru support for the constituency vote in National Assembly elections at 24% (an unusually high figure in recent years); this compares with the 19.3% that Plaid won on the constituency vote in 2011. Applying UNS from the poll to a projected Assembly election therefore means simply working through all forty constituencies and adjusting the Plaid vote upwards by 4.7% (i.e. 24-19.3). The Conservatives were at 19% in the same poll. This compares with the 25.0% that they scored in 2011. So for the Tories, UNS means working through all forty constituencies and adjusting the 2011 result downwards by 6% (i.e. 25.0-19).
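The arithmetic above can be sketched in a few lines of Python. The swings use the poll and 2011 figures quoted in the text; the individual constituency baseline shares below are hypothetical, purely for illustration:

```python
# Uniform National Swing: subtract last election's national share
# from the current poll share, in percentage points.
def uns_swing(poll_share, last_election_share):
    return poll_share - last_election_share

# Figures from the BBC/ICM poll vs the 2011 constituency vote.
swings = {
    "Plaid Cymru":  uns_swing(24.0, 19.3),   # +4.7
    "Conservative": uns_swing(19.0, 25.0),   # -6.0
}

# Hypothetical 2011 shares in one constituency (not real results).
constituency_2011 = {"Plaid Cymru": 30.0, "Conservative": 28.0}

# Apply the same swing to every constituency's 2011 share.
projected = {party: round(share + swings[party], 1)
             for party, share in constituency_2011.items()}
print(projected)  # {'Plaid Cymru': 34.7, 'Conservative': 22.0}
```

In a full projection the same adjustment would be applied to all forty constituencies, with the winner in each simply being the party with the largest projected share.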
For working out a projected general election outcome from a poll, all that needs to be done is to work through all forty constituencies for each party. For the Assembly it is just a little more complicated. I first work out the projected constituency results; I then apply UNS to the regional list vote for each party in each region, with the calculations allocating the list seats taking into account which parties are projected to have won the constituency seats in that region.
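The list-seat step can be sketched as follows. The Assembly allocates regional list seats by the d’Hondt formula, with each party’s divisor reflecting the constituency seats it is already projected to hold in that region; all the vote and seat figures below are hypothetical, purely for illustration:

```python
# Sketch of d'Hondt list allocation that credits projected
# constituency wins, as used for Assembly regional seats.
def dhondt_list_seats(list_votes, constituency_seats_won, num_list_seats=4):
    seats = dict(constituency_seats_won)        # running totals per party
    list_seats = {party: 0 for party in list_votes}
    for _ in range(num_list_seats):
        # Each party's quotient: list votes / (seats already held + 1).
        winner = max(list_votes,
                     key=lambda p: list_votes[p] / (seats.get(p, 0) + 1))
        seats[winner] = seats.get(winner, 0) + 1
        list_seats[winner] += 1
    return list_seats

# Hypothetical region: Labour sweeps most constituency seats,
# so the four list seats mostly go to the other parties.
votes = {"Labour": 70000, "Conservative": 40000,
         "Plaid Cymru": 35000, "Lib Dem": 15000}
won = {"Labour": 7, "Conservative": 1, "Plaid Cymru": 0, "Lib Dem": 0}
print(dhondt_list_seats(votes, won))
# {'Labour': 0, 'Conservative': 1, 'Plaid Cymru': 2, 'Lib Dem': 1}
```

This is why a party can poll well yet gain little: sweeping the constituencies in a region inflates its d’Hondt divisor, pushing the list seats towards its rivals.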
So that’s the method. (Try it some time at home; hours of fun for all the family). How good is it? Well, it is clearly not flawless. The most obvious and immediate flaw is that it occasionally projects impossible outcomes. The BBC/ICM poll showed the Liberal Democrats’ constituency vote on 5%. This compares with the 10.6% they won in 2011; UNS would therefore suggest applying a reduction of 5.6% to the Liberal Democrats’ vote share in all forty constituencies. OK – but what about somewhere like Llanelli, where the Liberal Democrats won only 2.1% in 2011?! Times are tough for the Lib Dems at the moment, but I’m confident that their vote share hasn’t yet dipped below zero anywhere…
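The Llanelli case makes the flaw concrete in two lines of arithmetic:

```python
# The impossible-outcome flaw, using the figures quoted above:
# a uniform -5.6 point swing pushes the projected Lib Dem
# share in Llanelli below zero.
swing = 5.0 - 10.6          # national poll share minus 2011 share
llanelli_2011 = 2.1         # Lib Dem share in Llanelli, 2011

raw = llanelli_2011 + swing
print(round(raw, 1))        # -3.5: an impossible vote share

# One crude workaround is to floor projections at zero, though
# that only patches the symptom, not the underlying assumption.
projected = max(raw, 0.0)
```

A floor of zero keeps the numbers legal, but the deeper problem remains: a party polling badly nationally simply cannot lose votes it never had in a given seat.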
The other major limitation on UNS is that it does not allow for factors that are likely to produce local variations from national swings. We know that such variations exist, and indeed have generally been increasing in size: across the UK, the standard deviation from the national swing has risen. These deviations can be regional, such as we saw in the last Assembly election, where the swing to Labour was notably bigger in the three south Wales regions than in either Mid & West Wales or North Wales. But deviations can also be particular to a single constituency, due to some specific local issue, to splits in a local party, or simply due to an individual local candidate who is unusually effective (or unusually poor). A common source of at least modest constituency-specific deviations is incumbency: i.e. the sitting member normally does accrue some sort of personal vote. Where a party’s candidate is standing for re-election for the first time there is normally a modest incumbency bonus; conversely, where an incumbent representative stands down and a party is defending a seat with a new candidate, they will typically experience worse than average swings.
UNS doesn’t account for any of these potential sources of variation. Why, then, do people use it? First and most obviously, analysts need some form of simple and neutral formula for projecting from polling numbers to an election outcome. We know that UNS is not perfect, but it is less flawed than any alternatives (for example, see the discussion here of ‘proportionate swing’).
Second, in the aggregate, UNS is normally pretty good in terms of projecting election results from the vote shares won by each party. Thus, in the 2011 National Assembly election there were seven constituency seats where the result differed from that which would have been predicted by UNS changes from 2007 to 2011. Seven out of forty is quite a high proportion. But these local idiosyncrasies largely cancelled each other out. Overall, the net differences between the final result and that predicted by UNS were small: the Conservatives won two more constituency seats than UNS would have projected and Labour one more; Plaid won two fewer than UNS suggested and the Lib Dems one fewer.
In my view, UNS provides us with a broad guide – a baseline gauge against which both the overall performance of the parties, and the results of individual seats, can be assessed. But it doesn’t provide us with anything more than that. And projections, using UNS, of polls conducted now are most definitely not any sort of infallible prediction of exactly what will happen at some point in the future. UNS is a perfectly reasonable tool, provided that we understand the limits of its usefulness.