Impact measurement. Two little words that have been embraced by some, but which can seem awfully cold and sterile to charities thinking about how to find out if what they do works.
It’s first worth stating why measuring impact is so important, as obvious as it might seem. Charities are in the business of improving people’s lives – whether that’s reducing reoffending among young offenders or encouraging more people to recycle. They strive to improve the effectiveness of what they do. This is only possible if they have an idea of the impact of their existing services.
Second, we need to make better use of what works in improving people’s lives. Innovation is hugely important, but reinventing the wheel is wasteful. When charities are designing services, they should start with what people have learned before. We’ll improve what we do as a sector only if we make more use of what has been found to work elsewhere. There are some great free online resources that pool this collective knowledge. For example, the Blueprints database captures knowledge about what is proven to work in reducing youth offending. The Social Research Unit will be launching a European version of this database later this year, covering all areas of children’s outcomes. The Institute for Effective Education’s Best Evidence Encyclopaedia summarises what we know about improving children’s education outcomes.
Last, even the best-intentioned services can cause harm. One example is a programme called Scared Straight, developed in the US but also used here in the UK. It tries to deter young people at risk of offending by taking them into prisons to meet hardened criminals. It has been found to actually increase the likelihood that young people offend.
So impact measurement is important; now on to the myths.
Myth 1: It is impossible to measure ‘soft outcomes’

Rigorous measurement tools actually exist for the vast majority of outcomes. For example, a charity hoping to improve children’s behaviour could use the Strengths and Difficulties Questionnaire to measure its impact – a scientific tool that takes the form of a five-minute questionnaire and is available in many languages.
Myth 2: It’s expensive

There are, of course, resource implications, and people might feel we are better off spending that money on providing services. But building evaluation into what we do will help make services more cost-effective in the long run. At the Social Research Unit, we are testing ways to do rigorous evaluation that carry fewer up-front costs.
Myth 3: All types of impact measurement are as useful as each other

This is not true: a badly designed evaluation will tell you very little, and there are plenty of consultants out there peddling these kinds of designs.
Myth 4: Some types of impact measurement, most notably the ‘gold standard’ of randomised controlled trials, are unethical because they involve allowing some people to access a service and not others

The truth is that this happens all the time in services. We rarely make something available to everyone who might be eligible for it at the same time, whether that be a new parenting programme or a new type of drug. Because we might be doing harm, we should find out before we roll out – it would be unethical not to do so.
But debunking the myths alone will not lead to across-the-board impact measurement. Subjecting what you do to rigorous impact evaluation is not easy. It involves a level of scrutiny that does not always lead to the answers we hope for. It is the right thing to do, but it is also a brave thing to do, which is why we should congratulate those charities that have begun the journey. At the SRU we are working directly with charities to help them turn their smart ideas into proven programmes, and we are in the process of developing a free online app that will allow charities to self-assess the quality of their impact measurement systems.
We also need stronger commitment from government to find out what works, both by funding charities to evaluate as well as deliver services, and in its policymaking. Too often, impact measurement is the first thing to go, whether through political expediency when the evidence doesn’t fit with a politician’s pet project, or through cost-cutting. Last month, the White House sent a memo to all departments and agencies in the US setting out clear expectations from the President on measuring impact and using evidence. We need that same level of political buy-in here.
Sonia Sodha is head of policy and strategy at the Social Research Unit at Dartington