If a week is a long time in politics, what do twelve months amount to in the polling industry? It was only this time last year that ICM produced four consecutive 5-point leads for Labour in our monthly Guardian polls, leading us to fend off accusations that our polling methods were too cautious and failed to cater for subtle but ‘true’ movements in public opinion.
Now, the shrill cries are in reverse. Our poll yesterday showed a neck-and-neck horse race, turning last month’s 7-point Labour lead into dust. “Rogue”, they cry! Methods don’t work! If something looks odd it’s probably wrong!
For some, the fact that we don’t prompt for UKIP in the way we do for Labour, the Conservatives and the Liberal Democrats means we understate the party’s share. Yet only two months ago our Guardian poll recorded the highest score that UKIP had yet received in any poll (a figure quickly surpassed by others). Now that we find, for the second month running, that the polish is coming off the UKIP deck, we are criticised for handing back to the Tories chunks of the share they lost.
But during this whole time our methods have not changed. They have not changed in any substantive way since prior to the 2010 General Election, and indeed are very close to those ICM first employed in the 1997 General Election when our founder, the genius Nick Sparrow, reinvented the way to poll after the debacle of 1992 when most polls predicted that Neil Kinnock was going to be Prime Minister one day before John Major actually was.
Unusually, some of these cries now emanate from our polling competitors. There are valid criticisms that can be made of any poll, of course. Peter Kellner, President of YouGov, has one for us: our sample size is not big enough (1,000, while YouGov polls 2,000 people daily). Bigger is better, without doubt; few would disagree. But rare is the Political Editor who has not commissioned polls of 1,000 people (or fewer), a number most are entirely comfortable with – and certainly there is little evidence that the primary driver of poll accuracy is sample size rather than sound method. If there were such evidence, perhaps we should look to YouGov’s final poll for the Sun before the 2010 General Election, which comprised a whopping 6,483 people. Alas, the average error it produced was nearly double that of ICM’s meagre-sized effort. Let us also note that YouGov’s final prediction before the AV referendum in 2011 contained 5,725 people – surely enough to get it right? Nope. ICM hit the bulls-eye that day with no error at all, the most accurate prediction in British opinion polling history; YouGov had a bad day at the office with 8% error.
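The diminishing returns of raw sample size can be sketched with the textbook margin-of-error formula for a simple random sample. To be clear, this is a stylised calculation, not ICM’s or YouGov’s actual methodology (real polls involve weighting and design effects that this ignores); it only illustrates why going from 1,000 to 6,483 respondents buys far less precision than one might expect:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random
    sample of size n, evaluated at the worst case p = 0.5."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Sample sizes mentioned above: ICM's typical poll, YouGov's daily poll,
# and YouGov's final 2010 poll for the Sun.
for n in (1000, 2000, 6483):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1f} points")
```

Quadrupling the sample roughly halves the sampling error (from about ±3.1 points at n = 1,000 to about ±1.2 at n = 6,483) – and, crucially, systematic error from flawed methodology does not shrink with n at all, which is the point being made here.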
Peter’s other point is that the frequency of the daily poll means that YouGov’s occasional polling chaff is easily sorted from its prodigious wheat, while ICM’s cannot be, given the little-more-than-monthly polling that ICM conducts. YouGov is a hugely successful company whose innovation has shaped the modern polling environment, and polling every day no doubt sets the political narrative. But just because something happens every day does not mean it can’t be wrong every day. All pollsters have different methods, and outside of that one moment when our work is proved right or not-so-right, it’s likely that if a methodological flaw is present once, it will be present every time a poll is conducted. To be clear, I’m not saying that YouGov’s methods are flawed, or that ICM’s are perfect, but trusting mid-political-cycle polls just because they are big, regular and typically consistent is not proof of anything at all.
There is little point in getting into a tedious methodological debate about the relative merits of telephone versus online polling. Perhaps it’s better to rely on the evidence. There is one thing that consumes most pollsters – producing the most accurate final pre-election prediction of a General Election result – the Holy Grail for those of us fortunate enough to work in this business. On that critical measure – yes, even in those elections when ICM was allegedly ‘out of line’ a furlong from the finishing post – it was ICM that produced the most accurate result on three of the last four occasions (1997, 2001, 2010) according to the British Polling Council’s independent adjudication, each poll on behalf of The Guardian. In 2005, the one election where we missed out, we were pretty close too, beaten by the glorious hit-the-bullseye performance of GfK NOP. Let us also note that the 2010 General Election was not seen to be a particularly good one for any of the online pollsters.
As a financial adviser would say, past performance may not be a good indication of future results: the rise of UKIP could well be make or break for pollsters this time around, and few people are sure how to deal with the emergent party. But our methods have produced spot-on predictions of the total ‘Others’ share of the vote in each of the last four elections, and until somebody can definitively show us that such success will be unlikely come the 2015 General Election – which they can’t – only the foolhardy would abandon methods that have worked so well and for so long.