Atomik Research

Polls vs. Exit Polls: Why did so many get it so wrong?

[Image: maggie-simpson-uk-map – the 2015 UK results map]

As party leaders’ heads roll following one of the most shocking nights in British political history, many sleep-deprived voters are clutching their fourteenth coffee of the day and trying to get their minds around how we ended up here.

With initial polls indicating the closest general election for a generation, predicting that you wouldn’t be able to fit a slice of bacon between Ed Miliband’s Labour and the incumbent Conservatives, a UK map that looks more like Maggie Simpson than a country (see above) seemed highly unlikely.

And yet the Conservatives stormed to an unexpected majority, turning England and Wales blue as seemingly everyone north of the border (predictably) bathed Scotland in SNP yellow (something the Lib Dems must be looking at with teary-eyed envy).

So what actually happened? Why did research that was taken directly from the voters themselves give such skewed indications of how the UK planned to recruit its next government?

The simple fact is that the way you conduct any kind of poll or research is about much more than ‘yes’ or ‘no’ questions and the number of respondents, and any kind of miscalculation or misjudgement can leave you with numbers and conclusions which simply don’t add up.

Firstly, the difference between a poll and an exit poll is extremely important. As you are probably already aware, an exit poll is taken straight after respondents have cast their votes, while initial polls can be taken well before any ballot has been cast, which can lead to massive discrepancies.

So when the first exit poll was released last night, showing figures so radically different to those circulating just 24 hours before, poor Paddy Ashdown was so confident in their inaccuracy that he promised to eat his own hat live on air if they came to pass (a hat which had its own Twitter account within the hour). Sadly for Lord Ashdown, his Liberal Democrats actually finished on fewer seats than predicted by the exit polls, prompting calls for him to grab a bib and tuck into a fedora sandwich.

There are two main issues with comparing polls and exit polls. Firstly and most obviously, people can easily change their minds in the run-up to polling day (if they couldn’t, then there wouldn’t be a great deal of point in campaigning). But also there’s the issue of whether the candidate you say you’ll vote for in public is actually who you’ll put an ‘X’ next to when nobody’s watching.

We saw this ahead of last year’s referendum on Scottish Independence, when many initial surveys suggested a neck-and-neck race. But when the votes were counted, the ‘No’ vote strolled to a 55%-45% win, an almost unthinkable majority in a two-choice vote. In the aftermath, many commentators speculated that, while many voters outwardly espoused the idea of a strong, independent Scotland, when left to their own devices, the idea of free university education and keeping the pound felt like a safer bet.

Has a Conservative-led government which has overseen a steady economic recovery meant that many of the coalition-bashers were actually shy Tories just waiting for polling day to strengthen the status quo? It wouldn’t be the first time it’s happened.

But it isn’t just when you run a survey that can affect the outcome; who you actually ask, and how often, can influence the results as much as anything. And after weeks of daily polls which failed to reflect the proportion of red, blue and yellow ties sitting in the House of Commons, it is worth asking whose views were being broadcast.

The simple fact is that a smaller sample size produces less reliable results, while a repeated survey of a similar group of people can’t possibly give you the full picture of a nation’s attitudes or changing opinions. It’s a bit like asking ten people in a pub what their favourite sport is on the same night as the local snooker club’s big evening out: it doesn’t mean that 90% of the UK love Ronnie O’Sullivan.
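To put a rough number on that (a back-of-the-envelope sketch, not figures from any actual poll): for a simple random sample, the margin of error shrinks roughly with the square root of the number of people asked, which is why a handful of pub-goers tells you very little.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    n: number of respondents, p: observed proportion (0.5 is the worst case),
    z: z-score for the confidence level (1.96 for roughly 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Ten people in a pub vs. a 1,000-person national poll (illustrative numbers only)
print(f"n=10:   +/- {margin_of_error(10):.0%}")    # roughly +/- 31 points
print(f"n=1000: +/- {margin_of_error(1000):.1%}")  # roughly +/- 3.1 points
```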

It may sound obvious, but the way to get the most accurate survey and poll results, particularly on a national level, is to ask as many people from as many demographics and geographical locations as humanly possible. This is the only way to minimise the effect of unavoidable factors like shy Tories, floating voters and regional discrepancies.
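Pollsters also routinely weight the answers they do get so that each group counts in proportion to its real share of the population. That isn’t specific to any of the 2015 polls, just standard practice, and the regions and figures below are invented purely for illustration.

```python
# Hypothetical example: re-weight an unrepresentative sample by region.
# All figures are invented for illustration, not real polling data.
population_share = {"Scotland": 0.08, "North": 0.24, "Midlands": 0.26, "South": 0.42}
sample = {                      # who we actually reached, and how they said they'd vote
    "Scotland": {"n": 300, "blue": 0.15},
    "North":    {"n": 200, "blue": 0.30},
    "Midlands": {"n": 250, "blue": 0.40},
    "South":    {"n": 250, "blue": 0.50},
}

raw = sum(g["n"] * g["blue"] for g in sample.values()) / sum(g["n"] for g in sample.values())
weighted = sum(population_share[r] * g["blue"] for r, g in sample.items())

print(f"Raw sample:      {raw:.1%} blue")       # skewed by over-sampling one region
print(f"Weighted sample: {weighted:.1%} blue")  # closer to the true population mix
```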

Also, never overlook the concept of allowing people to tick ‘other’.

Almost every poll published saw the voting intention figures add up to a neat 100%, either forcing floating voters to make a decision as they answered the survey or simply filtering them out altogether.

The decision on who to vote for at the 2015 General Election was incredibly difficult for millions of voters, and a huge number of swing votes will almost certainly have been decided in the final days and hours before polling.

So how can we possibly have a true reflection of how the country feels if we either push the ‘Undecideds’ into a camp that they don’t belong in or discount them altogether? People can make up their minds very quickly, and they can change them even faster.
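Here is a quick, entirely hypothetical illustration of the difference it makes when the ‘Undecideds’ are filtered out of the headline figures rather than reported alongside everyone else:

```python
# Hypothetical poll of 1,000 people, including those who haven't decided yet.
responses = {"Blue": 360, "Red": 350, "Yellow": 90, "Other": 50, "Undecided": 150}

total = sum(responses.values())
decided = total - responses["Undecided"]

for party, n in responses.items():
    if party == "Undecided":
        continue
    # Headline figure if undecideds are dropped vs. the share of everyone actually asked
    print(f"{party:9s} {n / decided:5.1%} of decided voters | {n / total:5.1%} of all respondents")

print(f"Undecided: {responses['Undecided'] / total:.1%} of all respondents")
```

In this made-up example, a party apparently ‘on 42%’ is really backed by only 36% of everyone asked, with a block of undecideds large enough to move the final result by several points in either direction.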

To put it in very basic terms, electoral polling, like any market or consumer research, is a very delicate balancing act. Even the slightest oversight can leave you with egg (or hat crumbs) all over your face, so there is no substitute for experience, common sense, expertise, and a respect for and understanding of the people you’re asking.

But, to be honest, we could have told you that years ago. Ho-hum.

