This week the British Polling Council published its report into the failure of opinion polls to predict the 2015 General Election outcome. The investigation found that the online and telephone polling techniques overstated support for the Labour party.
The result was not explained by the reluctance of "shy Tories" to admit their support for the Conservatives; rather, the survey methods simply failed to reach key groups of the population who strongly supported the Conservatives. Their voices went unheard in the polling, but not at the ballot box.
So surveys don’t always provide an objective or accurate picture. Who knew? This should ring bells in the third sector. First, research in this field is awash with reports that rely on open-access survey instruments, without controls for the possibility that those who respond have a vested interest in painting a particular picture. Some regional infrastructure bodies have regularly announced the imminent demise of significant numbers of organisations on the basis of surveys that are clearly unrepresentative (eg over-reporting from large organisations); yet most organisations, in most places, continue in existence. So let’s be clear about the basis on which samples are constructed, and take account of this in the analysis, eg through appropriate weighting.
Second, what about non-respondents? In this field we rarely if ever read discussions of the differences between those who responded to all the questions on a survey and those who ignored some of them. Consider the national indicator surveys of 2008 and 2010: on the key survey question about the influence of statutory bodies on the third sector, one-fifth of respondents declared that they didn’t know or had no opinion. The proportion was much higher for some types of organisation in some regions, a pattern repeated for other questions, yet the implications for the merits of the survey received little or no discussion.
If sector researchers are going to do surveys, they need to be larger and more robust; small sample sizes yield results of limited value. There are many free-to-use calculators which explain what is required. Say you want to survey a local authority with 1000 voluntary organisations in order to estimate a proportion, eg the percentage of organisations that receive public funding. You would need 516 respondents to estimate this proportion to within +/- 3 percentage points. How many surveys of the third sector do you know that have anywhere near such numbers? Take a recent survey of a local authority with around 170 respondents: on a question where respondents are divided equally, the margin of error means the true figure lies somewhere between about 43 and 57 per cent, so unless the answer is well outside those limits, you can’t be confident which way the balance of opinion lies.
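The arithmetic above can be sketched in a few lines. This is a minimal illustration, not a substitute for a proper calculator: it uses the standard formula for estimating a proportion at 95 per cent confidence with a finite population correction, the function names are my own, and rounding conventions vary (some calculators would report 517 rather than 516).

```python
import math

def sample_size(population, margin, p=0.5, z=1.96):
    """Respondents needed to estimate a proportion p to within
    +/- margin at 95% confidence (z = 1.96), with a finite
    population correction for a known population size."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))  # finite population correction

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n respondents,
    at its widest (p = 0.5), ignoring the finite population correction."""
    return z * math.sqrt(p * (1 - p) / n)

print(sample_size(1000, 0.03))                    # 516 respondents for +/- 3 points
print(round(margin_of_error(170) * 100, 1))       # ~7.5 points for 170 respondents
```

With 170 respondents the margin of error on a 50/50 split is about 7.5 percentage points either way, which is where the 43 to 57 per cent range comes from.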
A cheaper and more robust method would be large-scale tracking of charity reports and accounts. You could get a long way with quantitative analyses of these. TSRC worked with NCVO to develop a dataset capable of tracking nearly 100,000 charities since the mid-1990s. There are challenges in managing and analysing a dataset on this scale but investment in that sort of data, plus the use of open data on funding streams, would save funders and infrastructure bodies a lot of wasted investment in studies which have cathartic value but have little use beyond that.
John Mohan is director of the Third Sector Research Centre