Latest poll – methodological change
The methodological debate between online and telephone polling on the EU referendum continues apace, but to date ICM Unlimited has, I think, tried to be even-handed about both data collection methodologies. Our simultaneous mode test on behalf of The Guardian last week (and last month) not only demonstrated that we have a foot in both camps, but that the discrepancy in findings persists even when all post-fieldwork weighting and adjustment techniques are applied in an identical manner.
For the record, last weekend’s two ICM/Guardian polls (fieldwork dates: 13th-15th May 2016) resulted in the following outcomes:
- Remain in the EU
- Leave the EU
The poll(s) as published were weighted demographically and by 2015 past vote, and adjusted in line with ICM’s standard post-2015 General Election mechanism.
Since the last election’s polling miss, and indeed prior to it, a prolonged and thorough investigation of our methods has taken place. More work has been undertaken on our online processes, given our view that the future of telephone polling is somewhat bleak, both because of doubts about its ability to produce representative samples in an age when the public has radically redefined its relationship with the telephone (the landline in particular), and because the cost and practicality of telephone polling are problematic for vote intention work.
A number of new methodological strings to the ICM bow have been tested, all with moderate but consistent impact. However, these methods were developed with the intention of improving the accuracy of Westminster vote intentions rather than European referendum outcomes. In an ideal world, their application would and should improve accuracy for both, but it cannot yet be assumed that such a happy eventuality would in fact be the outcome. As a result of unease about their potential power, and about unintended negative consequences elsewhere, we have decided to introduce only one such innovation at this time.
In their British Election Study paper, Mellon & Prosser analysed ‘first contacts’ vs ‘all contacts’ vote intention outcomes in the BES random probability face-to-face survey. They found that, after demographic weighting, the first-contacted set of respondents (n=714) produced a vote prediction containing the same errors that all 2015 pre-election polls contained – particularly an over-statement of the Labour share and the UKIP share. Once all contacts had been added in (n=2,955), i.e. all those people who were less easy to reach, the error all but disappeared.
Our analysis of our online omnibus surveys in April and May 2015 suggests that the same sort of phenomenon is present in the online context. Too many easy-to-reach people (i.e. our online equivalent of first contacts) appear to support UKIP and Britain leaving the European Union, and their views only begin to be offset later on Friday night and through Saturday, once more people are at home or have access to a computer. In other words, making the survey process more accessible to more members of our panel is likely to reduce the chance of politically imbalanced samples.
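The first-contacts vs all-contacts comparison can be sketched in a toy simulation. Everything below is hypothetical apart from the BES group sizes quoted above (714 first contacts, 2,955 all contacts): the reachability split and the vote probabilities are invented purely to illustrate how an easy-to-reach skew washes out as harder-to-reach respondents are added.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical panel: easy-to-reach members respond first and are assumed
# (for illustration only) to be more Leave-leaning than hard-to-reach ones.
panel = (
    [{"reach": "easy", "vote": "Leave" if random.random() < 0.55 else "Remain"}
     for _ in range(714)]
    + [{"reach": "hard", "vote": "Leave" if random.random() < 0.42 else "Remain"}
       for _ in range(2955 - 714)]
)

first_contacts = panel[:714]  # analogous to the BES 'first contacts' set
all_contacts = panel          # analogous to 'all contacts' (n=2,955)

def leave_share(group):
    """Proportion of a group saying they would vote Leave."""
    return sum(r["vote"] == "Leave" for r in group) / len(group)

# The first-contact share overstates Leave relative to the full sample.
print(leave_share(first_contacts), leave_share(all_contacts))
```

The point of the sketch is only that restricting measurement to the quickest responders bakes their attitudinal skew into the estimate, however well the demographics balance.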
The change: identifying the issue
Vote intention and EU referendum polling undertaken by ICM has exclusively relied upon our own online omnibus, conducted at weekends. We also have a regular midweek online omnibus, but had not tested political questions on it until w/c 17th May, when we did so to help inform our decision. On this basis we are only able to speak about the nature of our own online panel, NewVista. While our expectation is that the patterns we have discovered would likely also affect other online panels, we cannot be sure, and we have no intention of criticising our competitors by implication.
At the launch of our online omnibus, a few hundred email invites are released in order to test the process and the questionnaire (a ‘soft launch’). Our policy has then been to release a sufficient number of further invites to secure the full representative sample of 2,000 members of the adult public.
This appears to have affected the political profiles and attitudes of the online samples we achieved. Interviews tend to build up quickly on each Friday night, probably because certain types of people are more readily available and willing to participate. Indeed, there is remarkable consistency across our online polls, with big Leave leads built up in each hour from 4pm to 9pm on a Friday, partially mitigated by big Remain leads in every hour thereafter until the survey closes, typically by Monday morning for data delivery to clients.
We believe it likely that the weight of interviews generated before 9pm on a Friday has the effect of consolidating a Leave lead as a result of the survey process itself: demographic quota cells fill up and ‘close’ once the target number has been hit. If a specific cell, such as 65+ men, is filled early with people disproportionately likely to support Leave, no additional 65+ men will subsequently be allowed onto the survey. As a result, the interviews achieved with 65+ men are unlikely to be politically or attitudinally representative of all such 65+ men, even though in demographic terms they look identical. Their presence possibly introduces a small skew in favour of Leave (or UKIP, depending on the question looked at).
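The quota-cell mechanism described above can be illustrated with a minimal sketch. This is a toy model, not ICM's actual sampling system; the cell target of 10 and the vote splits are hypothetical numbers chosen only to show how an early-filling cell locks in the attitudes of the quickest responders.

```python
class QuotaCell:
    """A demographic quota cell that 'closes' once its target is hit."""

    def __init__(self, target):
        self.target = target
        self.respondents = []

    @property
    def open(self):
        return len(self.respondents) < self.target

    def try_add(self, respondent):
        # Accept only while the cell is open; later arrivals are turned away.
        if self.open:
            self.respondents.append(respondent)
            return True
        return False

# Hypothetical 65+ men cell with a target of 10. Ten Leave-leaning early
# responders arrive before 9pm Friday and fill it...
cell = QuotaCell(target=10)
early = [{"vote": "Leave"}] * 7 + [{"vote": "Remain"}] * 3
for r in early:
    cell.try_add(r)

# ...so demographically identical but more Remain-leaning late arrivals
# are all rejected, and the cell reflects only the early responders.
late = [{"vote": "Remain"}] * 5 + [{"vote": "Leave"}] * 2
rejected = sum(not cell.try_add(r) for r in late)

leave_share = sum(r["vote"] == "Leave" for r in cell.respondents) / cell.target
print(rejected, leave_share)  # 7 rejected; Leave share 0.7
```

Staggering the launch, as described below, simply gives the later arrivals a chance to enter the cell before it closes.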
The change: process
As a result of this observation, the way in which ICM releases email invites has changed (as of w/c 23rd May). Only a ‘soft’ launch now takes place on a Friday night, and the full launch is staggered across the weekend, making Saturday and Sunday much more of an option for all respondents to complete the survey.
Our intention is not to prevent certain segments of our panel from participating in the survey. Rather, it is to maximise the opportunity for all panel members to participate by keeping the survey open over more of the weekend.
The change: weighting
Weighting has become a default mechanism for polling adjustments, seen as a panacea for all sorts of ills. It is not: it can partially mitigate observed problems, but it rarely eliminates them entirely. Only avoiding the problem in the first place can solve polling riddles.
However, it is unlikely that the process change outlined above will do more than partially solve the problem. Respondents more inclined to Brexit may be equally fast to respond to their invite at other times during the weekend, thus still affecting the data, albeit less overtly. As a consequence, we are overlaying a new weighting scheme to reflect the profile of response by speed of participation.
We will not publish full technical details of this weighting scheme, for fear of conditioning its power. However, we will be applying a “time of response” weight to reflect the disparity in response between early and late responders. The net effect of this weight, so far, has been to reduce the Leave share by up to 2 points, with a corresponding increase in the Remain share of up to 2 points. It is entirely possible that the strength and direction of this weighting effect will change if the pattern of response changes on any individual survey.
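The arithmetic of a time-of-response weight can be sketched generically. To be clear, this is NOT ICM's undisclosed scheme: the achieved and target shares, and the within-group Leave shares, are all invented figures used only to show how down-weighting an over-represented early group trims the headline Leave share.

```python
# Suppose 60% of completes arrive "early" (e.g. before 9pm Friday), but the
# fuller-weekend fieldwork pattern implies early responders should form only
# 45% of the sample. A cell weight is target share / achieved share.
achieved = {"early": 0.60, "late": 0.40}
target = {"early": 0.45, "late": 0.55}
weights = {k: target[k] / achieved[k] for k in achieved}  # early down-weighted

# Hypothetical Leave shares within each response-speed group.
leave = {"early": 0.52, "late": 0.44}

unweighted = sum(achieved[k] * leave[k] for k in achieved)
weighted = sum(achieved[k] * weights[k] * leave[k] for k in achieved)

# Down-weighting the Leave-leaning early group trims the Leave share, here
# by roughly a point, in the same direction as the effect described above.
print(round(unweighted, 3), round(weighted, 3))
```

The size and even the sign of such an adjustment depend entirely on how each survey's response pattern unfolds, which is why the effect can vary poll to poll.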
Reference: Mellon, J. & Prosser, C., Missing Non-Voters and Mis-weighted Samples: Explaining the 2015 Great British Polling Miss.