• EU referendum tracker

    Latest poll – methodological change
    The methodological debate between online and telephone polling on the EU referendum continues apace, but to date ICM Unlimited has, I think, tried to be even-handed about both data collection methodologies. Our simultaneous mode test on behalf of The Guardian last week (and last month) not only proved that we have a foot in both camps, but that the discrepancy in findings persists even when all post-fieldwork weighting and adjustment techniques are applied in an identical manner.
    For the record, last weekend’s two ICM/Guardian polls (fieldwork dates: 13th-15th May 2016) resulted in the following outcomes:

                          Telephone   Online
    Conservative             36%       34%
    Labour                   34%       32%
    Liberal Democrat          7%        7%
    UKIP                     13%       17%
    Green                     4%        4%
    SNP                       4%        5%
    Plaid Cymru               1%        *%
    Other                     1%        1%

    Remain In the EU         47%       43%
    Leave the EU             39%       47%
    Don’t know               14%       10%

    The poll(s) as published were weighted demographically and by 2015 past vote, and adjusted in line with ICM’s standard post-2015 General Election mechanism.
    Since the last election’s polling miss, and indeed prior to it, a prolonged and thorough investigation of our methods has taken place. More work has been undertaken on our online processes, given our view that the future of telephone polling is somewhat bleak, for reasons linked both to its ability to produce representative samples in an age when the public has radically redefined its relationship with the telephone (the landline in particular), and to the cost and practicality of telephone polling for vote intention work.
    A number of new methodological strings to the ICM bow have been tested, all with moderate but consistent impact. However, these methods were developed with the intention of improving the accuracy of Westminster vote intentions rather than European referendum outcomes. In an ideal world, their application would and should improve accuracy for both, but it cannot yet be assumed that such a happy eventuality would, in fact, be the outcome. As a result of unease about their potential power – and unintended negative consequences elsewhere – the decision has been made to only introduce one such innovation at this time.

    Theoretical base
    In their British Election Study paper, Mellon & Prosser[1] analysed ‘first contacts’ vs ‘all contacts’ vote intention outcomes in the BES random probability face-to-face survey. They found that, after demographic weighting, the first-contacted set of respondents (n=714) produced a vote prediction containing the same errors that all 2015 pre-election polls contained – particularly an over-statement of the Labour share and the UKIP share. Once all contacts had been added in (n=2,955), i.e. all those people who were less easy to reach, the error all but disappeared.
    Our analysis of our online omnibus surveys in April and May 2015 suggests that the same sort of phenomenon is present in the online context. Too many easy-to-reach people (i.e. our online equivalent of first contacts) appear to support UKIP and Britain leaving the European Union, and their views only begin to be offset later on Friday night and through Saturday, once more people are at home or have access to a computer. In other words, making the survey process more accessible to more members of our panel is likely to reduce the chance of politically imbalanced samples.
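    The first-contacts effect described above can be sketched numerically. The figures below are hypothetical, purely for illustration: early batches lean Leave, later (harder-to-reach) batches lean Remain, so the running estimate drifts as fieldwork progresses.

```python
# Hypothetical illustration of the first-contacts effect: the running
# Leave share drifts downwards as harder-to-reach respondents arrive
# later in fieldwork. Batch sizes and shares are invented for the sketch.
batches = [
    (600, 0.52),  # Friday 4pm-9pm: easy-to-reach, Leave-leaning
    (700, 0.46),  # Friday night / Saturday
    (700, 0.43),  # Sunday: harder-to-reach, Remain-leaning
]

def running_leave_share(batches):
    """Cumulative Leave share after each batch of interviews is added."""
    total, leave = 0, 0.0
    shares = []
    for n, share in batches:
        total += n
        leave += n * share
        shares.append(leave / total)
    return shares

shares = running_leave_share(batches)
# A poll closed after the first batch would report 52% Leave; the full
# sample of 2,000 lands several points lower.
```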

    The change: identifying the issue
    Vote intention and EU referendum polling undertaken by ICM has exclusively relied upon our own online omnibus, conducted at weekends. We also have a regular midweek online omnibus, but had not tested political questions on it until w/c 17th May, when we did so to help inform our decision. On this basis we are only able to speak about the nature of our own online panel, NewVista. While our expectation is that the discoveries we have made would also likely affect other online panels, we cannot be sure, and we have no intention of criticising our competitors by implication.
    In the launch of our online omnibus, a few hundred email invites are released in order to test the process and the questionnaire (soft launch). Our policy has then been to release a sufficient number of invites to secure the full representative sample of 2,000 members of the adult public.
    This appears to have affected the political profiles/attitudes of the online samples we achieved. Interviews tend to build up quickly on each Friday night, probably because certain types of people are more readily available and willing to participate. Indeed, there is a remarkable consistency across our online polls, with big Leave leads being built up in each hour from 4pm to 9pm on a Friday, partially mitigated by big Remain In leads every hour thereafter until the survey closes, typically by Monday morning for data delivery to clients.
    We believe it likely that the weight of interviews generated before 9pm on a Friday has the effect of consolidating a Leave lead as a result of the survey process itself – demographic quota cells fill up and ‘close’ once the target number has been hit. If a specific cell, such as men aged 65+, is filled early with people disproportionately likely to support Leave, no additional men aged 65+ will subsequently be allowed onto the survey. As a result, the interviews achieved with men aged 65+ are unlikely to be politically or attitudinally representative of all such men, even though in demographic terms they are identical. Their presence possibly introduces a small skew into the data in favour of Leave (or UKIP, depending on the question looked at).

    The change: process
    As a result of our observation, the way in which ICM releases email invites has changed (as of w/c 23rd May). Only a ‘soft’ launch will take place on a Friday night, and full launch will be staggered across the weekend, making Saturday and Sunday much more of an option for all respondents to complete the survey.
    Our intention is not to prevent certain segments of our panel from participating in the survey. Rather, it is to maximise the opportunity for all panel members to participate by keeping the survey open over more of the weekend.

    The change: weighting
    Weighting has become a default mechanism for polling adjustments, seen as a panacea for all sorts of ills. It is not; it can partially mitigate observed problems but rarely eliminate them entirely – only avoiding the problem in the first place can solve polling riddles.
    However, it is unlikely that the process change outlined above will solve the problem more than partially. Respondents more inclined to Brexit may be equally fast to respond to their invite at other times during the weekend, still affecting the data but less overtly. As a consequence, we are overlaying a new weighting scheme to reflect the profile of response by quickness to participate.
    We will not publish full technical details of this weighting scheme, for fear of conditioning its power. However, we will be applying a “time of response” weight to reflect the disparity in response between early responders and late responders. The net effect of this weight, so far, has been to reduce the Leave share by up to 2 points, with a corresponding increase in the Remain share. It is entirely possible that the strength and direction of this weighting effect will change if the pattern of response changes on any individual survey.
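    Since the scheme itself is unpublished, the sketch below shows only one simple form such a weight could take – a rim weight that forces the early/late split to a chosen target profile. Every number here (the 60/40 observed split, the 50/50 target, the Leave shares by bucket) is an assumption for illustration, not ICM’s actual scheme.

```python
# Hypothetical form a "time of response" weight could take (ICM's actual
# scheme is unpublished): weight each response-time bucket so its share
# of the weighted sample matches a chosen target profile.
observed = {"early": 0.60, "late": 0.40}   # assumed share of completes
target   = {"early": 0.50, "late": 0.50}   # assumed desired profile

weights = {b: target[b] / observed[b] for b in observed}

def weighted_leave_share(bucket_shares, leave_by_bucket, weights):
    """Leave share after applying the per-bucket weights."""
    num = sum(bucket_shares[b] * weights[b] * leave_by_bucket[b]
              for b in bucket_shares)
    den = sum(bucket_shares[b] * weights[b] for b in bucket_shares)
    return num / den

# Assumed Leave shares by bucket: early responders lean Leave.
leave_by_bucket = {"early": 0.50, "late": 0.42}
unweighted = weighted_leave_share(observed, leave_by_bucket,
                                  {"early": 1.0, "late": 1.0})
weighted = weighted_leave_share(observed, leave_by_bucket, weights)
```

    With these assumed inputs, down-weighting early responders trims the Leave share by a point or so – the same order of effect as the "up to 2 points" reported above.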

     

    [1] Jon Mellon & Christopher Prosser, Missing Non-Voters and Mis-weighted Samples: Explaining the 2015 Great British Polling Miss

  • EU referendum tracker

    With the referendum now less than three months away, ICM is moving on to more of a war footing, this week introducing both political registration and turnout modelling into our weekly tracker data.

    While more consistent with how we would ordinarily conduct vote intention polls, we should point out that the turnout modelling remains pretty basic, employing a standard (for want of a better phrase) turnout weight based on the traditional 1-10 scale question. So, if someone says they are 10 out of 10 certain to vote, they are taken at face value. People with a lower likelihood are similarly given a factor directly linked to their answer. Note that, unlike previous General Election turnout modelling, we are not halving the turnout factor if individual respondents failed to vote at the last General Election.
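    The standard 1-10 turnout weight described above reduces to a simple factor. The respondents below are invented for illustration; only the weight itself (certainty divided by ten, with no halving for 2015 non-voters) follows the description in the text.

```python
# Sketch of the standard 1-10 certainty-to-vote turnout weight: each
# respondent's vote counts in proportion to their stated likelihood,
# with no extra halving for 2015 General Election non-voters.
def turnout_weight(certainty_1_to_10):
    return certainty_1_to_10 / 10.0

# Hypothetical respondents: (certainty to vote, referendum preference).
respondents = [(10, "remain"), (10, "leave"), (5, "leave"), (2, "remain")]

def weighted_share(respondents, side):
    """Turnout-weighted share for one side of the question."""
    num = sum(turnout_weight(c) for c, v in respondents if v == side)
    den = sum(turnout_weight(c) for c, v in respondents)
    return num / den
```

    In this toy sample Remain and Leave split 2-2 unweighted, but the lower-certainty Remain voter counts for only a fifth of a vote, tipping the weighted result towards Leave.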

    Firstly, the question of turnout is the hot topic of the moment, but the response is pretty orthodox. The proportion saying they are 10/10 certain to vote (prior to the 2015 General Election, probably the best simplistic indicator of actual turnout, but deemed in these parts to be higher than the actual turnout is likely to be) is 63%, 2 points higher than the 10/10 score on this poll for a GE held ‘tomorrow’.

    The impact on EU intentions is also unexciting, but intuitive. After excluding the non-voters and the DKs, the top line numbers move from 52% vs 48% in favour of Remaining In, to 51% vs 49%. An alternative model of turnout that employs geo-demographic modelling to predict different turnout chances for specific socio-economic groups (on this occasion) does not impact on the headline numbers, shown below:

    Remain In 44%
    Leave 43%
    DK 13%

    Note that numbers of Don’t Knows fall significantly from an average of 18% to 13%. In effect, in previous polls a chunk of people who said “Don’t know” were actually “Won’t Vote”. This partially explains the wide discrepancy on DKs between phone and online polls, but alas, does not explain the difference in headline numbers between the two data collection methodologies.
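    The re-basing step used throughout these posts is just an exclude-and-renormalise calculation; applying it to the published figures above (Remain 44%, Leave 43%, DK 13%) recovers the 51:49 headline.

```python
# Re-basing the published figures: strip the Don't Knows and express
# Remain and Leave as shares of those giving a view.
remain, leave, dk = 44, 43, 13

def strip_dks(remain, leave):
    """Headline shares among those expressing a preference, in whole points."""
    expressed = remain + leave
    return round(100 * remain / expressed), round(100 * leave / expressed)

headline = strip_dks(remain, leave)  # 44/87 and 43/87 -> (51, 49)
```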

  • EU Referendum Tracker

    Battle is joined and the main players have lined up behind their cause. The Project Fear instigated by the Prime Minister appears pervasive, and the Leave camps still await official designation status, while being attacked on all fronts. Surely a time to see some polling separation?

    Nope.

    In the 25th poll in our weekly series, the two opposing groups stand neck and neck, both sitting on 41% with 18% undecided, translating into a fairly obvious 50:50 once DKs are stripped out.

    We might wonder what turnout and the more aggressive technical weighting schemes in ICM’s (sorry, still private) possession will do to topline numbers. Here’s a clue: if sampling issues affect all voting intention polls on an equal footing, largely to the benefit of Labour, then observers might think that such schemes will be designed to downplay the impact of voters disproportionately unlikely to vote, even if they say on a poll that they will – i.e. Labour voters. Our weekly polls consistently suggest that Labour voters fall in a rough ratio of 2:1 for Remaining In. Please play that tape to the end.

    However, let’s be clear that toying with different technical mechanisms is a serious and dangerous game, and no honest pollster would say they are fully comfortable or certain of the veracity of various evolving weapons in their armoury. We are still reacting to issues which are deeply embedded in our processes, and we should remember that telephone polls and online polls say pretty different things on this subject – something that I’m certain is linked to online polls carrying too many UKIP voters, and telephone polls carrying too many Labour voters.

    Sir Bob Worcester often used to say that polls don’t predict but pollsters do. Not sure I ever fully agreed, but here’s my prediction: the ‘truth’ right now is probably somewhere in between what telephone polls and online polls are saying.