A few months ago Joe Saxton, founder of and driver of ideas at nfpSynergy, wrote a fascinating blog challenging our sector's enthusiasm for randomised controlled trials. But it prompted consideration of the question one step removed – before we look at charities borrowing the tools of science, is it appropriate to compare the charitable and scientific worlds at all?
Randomised controlled trials sit at the top of the scientific method's hierarchy of evidence; science learns from itself and from its mistakes, around the globe and across the centuries.
The scientific method demands that results be replicable and expects that an important experiment will be run by entirely different people time and again. Scientific rivals and successors pore over datasets, read failed experiments and negative results, and perfect techniques.
This is not the only difference between the two spheres. Science and technology are wedded far more firmly to economic progress than charity is. Shareholders, government, hospital directors and the public take note of developments such as new drugs or stem cell breakthroughs. Charity practitioners have nowhere near this level of awareness of one another's work, because we have no such support structure or history of sharing.
In the corporate world – certainly in the boardrooms of pharmaceutical and tech companies – directors are inquisitive and acquisitive. They are aware of all their competitors’ projects, of what newcomers try and of the innovations that spring up in far-flung corners. They have teams of researchers comparing clusters of studies and meta-analyses to scope opportunity, and are supported by academia, the business media and worldwide networks.
Of course, there are many collaborative efforts in the charity sector, from The Good Exchange to the concept of "generous leadership", and from joint initiatives between funder organisations and umbrella bodies to local projects in the same town or village. One of our ambitions at Charity Futures is to compile a directory of these, listing free and paid resources on charity academia, leadership and governance training, and emergency support. We hope that with signposting of both collaborations and smaller ventures, even more efficient partnerships can be forged. This could grow in utility by adding neutral reviews and learning aids, so new board members could easily discover the many tools that are available to help them.
Another difference between the scientific and voluntary worlds is simply that a lot of charities do not operate in a manner that shows quantifiable results. The goal of some is to enable a sport to be played, or to make the lives of people in a particular group more tolerable, even enjoyable. There are sector activities that suit social science measurements, such as helping ex-prisoners to reintegrate or educating children, but a host of important charitable activities have little to do with numbers. Has enough thought been devoted to testing an ethical component? Are quality-adjusted life years enough?
Trying to get a picture of impact by asking beneficiaries to rate their experiences feels like missing the point, even were methodologies sound enough to be compared across location and type – which they aren't. Some of the largest management consultancies have been trying for years to set out a standardised system for rating charity effectiveness, and each model founders on its own flaws. What the voluntary sector does brilliantly is to use hard scientific evidence in campaigning – against smoking near children, for example – and to fund investigation of this type. But that does not contribute to a central corpus on how charities themselves campaign.
The question of randomised controlled trials speaks to the charity bubble's current focus on transparency and impact. You get the sense that many charity leaders believe that if we could only display our accounts and give hard numbers on how many people we're helping, the public and press would return to treating all charities as angelic. This is a limp hope. Few people have the time or inclination to check the accounts and annual statements that charities painstakingly polish. Even if they do, they might not have the statistical grounding to make informed judgements.
Another difference charities might be happier about is that the sector is far less regulated: the Charity Commission comes down hard on some, but it pales in comparison with pharmaceutical watchdogs, for example. The commission does not review every new project, grant or intervention that a charity plans, not even very large experiments.
Try as we might, we cannot create a ready-made academic milieu for the voluntary sector, with the centuries of history, the international network of journals, the expectation of challenge, refinement and peer review. Multi-institutional, multinational collaborations do not spring up overnight but after years of relationship-building, of sharing techniques and ethics, and of agreeing shared goals. But this is certainly a goal to have in mind: through thought leadership, debate, seminars and working with the university sector across disciplines, we must strive to introduce higher standards of intellectual rigour and collective progress.
The full version of this article can be found at Bubb's Blog.
Sir Stephen Bubb is director of Charity Futures. Jonathan Lindsell is the research & programme manager of Charity Futures.