Measuring effectiveness is a growing priority for charities, but questions abound. David Ainsworth looks at the methodologies and Andy Hillier talks to charities that have put them into practice
In recent years, there has been a sea change in impact measurement by charities, voluntary organisations and social enterprises. Less than a decade ago it was little used and ill defined, but it has since taken on far greater importance.
Five years ago, Richard Kennedy, head of social investment at Community Action Network, which helps the sector increase its business, studied whether there would ever be a universal system of measuring social value. "At that time, it was pretty disappointing," he says. "But today, it's much more promising."
Anne Kazimirski, deputy head of measurement and evaluation at the think tank and consultancy New Philanthropy Capital, agrees. "We're further ahead than we were," she says. "The question previously was whether evaluation was worth your while; now we're looking at how to make it good."
In recent years, she says, detailed frameworks for measuring impact have sprung up and a wide range of impact measurement tools are available free to charities. The number of specialist agencies in the field has also mushroomed.
Sam Matthews, acting chief executive of Charities Evaluation Services, which helps the sector with demonstrating value and effectiveness, says buy-in from senior staff and funders has increased enormously. "Ten years ago people didn't think it was important, but you'd be hard pressed to find anyone who said that now," she says. "Capacity, not motivation, is now the problem."
She says the changes have been driven not so much by charities as by funders - especially larger ones such as the Big Lottery Fund and Children in Need - demanding more accurate information.
But the nature of impact assessment still varies widely. Kazimirski says different sector leaders have placed different emphases on it and different charitable sub-sectors have made different amounts of progress. "Some sectors, like youth and criminal justice, are further ahead than others because they've had government support," she says.
Kazimirski also says there is much guesswork and approximation and that measuring "soft outcomes" - such as increased happiness or self-confidence - is difficult. "We've got to a stage where we can find a way to measure most things," she says. "But not everyone uses the same method to measure something. In some sectors we can say there is a standard way; in others, there isn't."
Another difficulty is the number of competing methodologies that can be used to measure social impact: a recent paper by Karen Maas, an assistant professor at the Erasmus University Rotterdam, found 20 different approaches. Most follow a similar process of tracking what a charity has done, measuring what goals it wants to achieve and identifying a method of testing whether its beneficiaries have gained as a result of its actions. But they differ in the weighting they give to different parts of the process and in how the end result should be expressed.
Kennedy says most methodologies are similar. "It's about the emphasis you put on different parts of the process," he says. "We seem to be moving towards a consensus." Perhaps the best-known method, social return on investment, expresses a charity's achievement in cash terms. Others simply say what the charity achieved - for example, how many people reported an improvement in their lives or how many found work.
Social return on investment
Kennedy says SROI tends to divide opinion because it applies a financial value to a social impact. "Some people don't buy it," he says. "But SROI is a great set of principles that is applicable to anyone trying to understand social value. Price is something everyone can understand - it can be used to compare apples and oranges.
"SROI makes it possible to compare charities in different sub-sectors, which is very difficult for impact measurement to do. It can also be dangerous, because you are comparing very different things."
But putting an accurate price on a charity's intervention is problematic, Kennedy says. It might reflect the saving that an intervention makes for the state, but many interventions have a different value to the state than to the people they help.
"There's also the fact that the marginal cost is the thing you can save," he says. "If you reduce a prison population by 10 per cent, you haven't affected the number of prisons."
Kennedy says SROI can allow charities to think they have solved a problem, when in fact it has just been moved further down the road. And it is difficult to work out how much a charity's intervention has influenced a positive outcome.
'The motivation to overvalue'
Caroline Fiennes, a consultant and author, says the sector still has much to do to make its assessments more rigorous. "We still see a huge number of terrible impact assessments," she says. "There's still a long way to go."
She highlights the tendency of "pre-post studies", a relatively easy form of measurement that assesses beneficiaries before and after an intervention, to overvalue a programme's effectiveness.
A comparison of pre-post assessments and randomised controlled trials - a much more rigorous form of analysis - found that pre-post assessments can overvalue the significance of the intervention by up to six times. "The only answer is to have some sort of control," she says. "Otherwise you have no idea what would have happened anyway."
She also questions the usefulness of assessment by charities themselves. "You only have to look at the incentives to realise this is a rubbish way to do it," she says. "There's such a strong motivation to overvalue your impact."
Fiennes says that in the field of health, where impact measurement is well developed, studies show that assessments conducted internally are four times more likely to find positive outcomes than independent ones.
A third issue, she says, is standardisation of outcomes. "Everyone is using totally different metrics," she says. "One of the aims of impact measurement is to find out if your intervention is better than others. That's impossible if you're all measuring different things."
And a fourth problem, she says, is that measurement is often done only when a programme is finished. "You should be setting and testing targets, not outcomes," she says. "Plan what you want to achieve before you measure it - and ask your service users what their targets are."
Counting the cost
Another consideration, according to Kazimirski of New Philanthropy Capital, is that the desire for accuracy has to be balanced with simplicity and cheapness. Accurate data can be costly to produce and she feels it is vital that impact measurement is not the sole preserve of full-time professionals.
"We're big proponents of developing off-the-shelf tools," she says. "Charities can't afford to have a person working full time on assessment, and they can't afford external consultants. We would also promote simplicity - there are plenty of simple ways in which you can collect data."
Matthews of Charities Evaluation Services says funders that want charities to measure impact should help to fund that measurement. "Charities often can't, rather than won't, do it," she says. "It's turning people into quasi-social researchers. Without free support from second-tier organisations, charities will struggle."
IMPACT - A BEGINNER'S GUIDE
Impact measurement attempts to describe what charities achieve - the benefits to their members and service users - rather than merely tracking what they do.
A charity's activity typically consists of an input, such as carrying out a training course, and an output, such as the number of people who attend. In the past, charities have often been measured at this level - they are contracted to hold courses, for example, whether or not beneficiaries want them, or they are contracted to train 100 people, even if those people gain nothing from the training.
But specialists say it's more useful to measure outcomes, which track what beneficiaries gain from an intervention. These can range from an increase in knowledge to using the training they have received to get a job.
There are different ways of translating such outcomes into impact. Some specialists say impact is a measure of how much an outcome can be attributed to a particular intervention. Others see it as a longer-term result - so if an outcome is increased employment among a charity's beneficiaries, then the impacts are their happiness, increased productivity in their town and the decreased cost to the state.
The holy grail of impact measurement is consistency and comparability - a system of standardised metrics and methodologies, where someone reading two impact assessments can be sure that similar weight was given to similar results produced by two different charities, and where there can be an effective comparison of the efficiency of organisations in and across sub-sectors.
Those working in the sector say there is a long way to go. "At the moment social impact measurement is like lights going on gradually in different parts of the sector," says Richard Kennedy of Community Action Network. "Every so often those spots of light coalesce and you get a bigger patch of light. With a bit of luck and a lot of work, the whole sector will eventually be illuminated."
CASE STUDY 1 - PLACE2BE
Place2Be, which provides counselling in schools to pupils who have experienced problems such as bullying and family breakdown, has measured its impact since it was founded 18 years ago. It asks teachers, parents and children to complete questionnaires about a child's behaviour and feelings before and after receiving support.
The charity also tracks the child's academic attainment and school attendance. The data is reviewed by the charity's research and evaluation team and sent to the Child and Adolescent Mental Health Services Outcome Research Consortium so the results can be benchmarked against similar service providers.
Research by the charity last year showed that 79 per cent of the children it worked with improved psychologically and 60 per cent were more able to concentrate. A study by the consultancy Pro Bono Economics in 2010 showed that its services produced an SROI of at least £6 for every £1 spent.
Catherine Roche, chief operating officer at Place2Be, says the charity would like to measure its impact further. "We would like to be able to run more randomised controls and do a longitudinal study, but these tend to be expensive and we don't have the money at the moment," she says.
Roche adds that having evidence of its impact has helped Place2Be to secure funding. "It makes us stand out from others that deliver similar services," she says.
Correction - the SROI study was carried out by Place2Be following discussions with one of the founders of Pro Bono Economics, not by Pro Bono Economics itself as the charity initially said.
CASE STUDY 2 - BLACKPOOL ADVOCACY
Blackpool Advocacy receives about 500 referrals a year to its domestic abuse service from the police, social services and others. Since 2008, it has tracked its impact by using Caada Insights, a system created by Co-ordinated Action Against Domestic Abuse, a national charity.
Working with the victim, professionals complete a checklist to identify whether the individual is at high, medium or standard risk. They also record the severity and escalation of the abuse on a separate form. An additional form records injunctions, such as restraining orders.
When a case is coming to a close, practitioners record on a review form any change in the severity of abuse and how sustainable any reduction in risk is in the longer term. They also ask the victim whether they feel safer and if their quality of life has improved.
The forms are sent to Caada for analysis and comparison with the other domestic violence services that use the system.
Forms were completed for 208 of about 500 people referred to Blackpool Advocacy's domestic abuse service last year. Of these, 80 per cent reported that abuse stopped completely after receiving support, 68 per cent reported that they felt safer and 63 per cent reported that their quality of life had improved.
Dee Conlon, deputy chief executive of the charity, says the system has been useful in identifying trends, such as a fall in the number of gay and lesbian people reporting abuse.
CASE STUDY 3 - ST GILES TRUST
St Giles Trust, a charity that works with former offenders, conducted a study in 2008 and 2009 of its Through the Gates programme, in which more than 1,500 prison leavers received help to return to society without returning to crime.
The London Probation Service gave £1m to the scheme to help former offenders find accommodation, training and education, and to tackle problems such as drug and alcohol misuse.
St Giles and its research partners, Pro Bono Economics and Frontier Economics, tracked 583 people, two-thirds of whom were considered at medium or high risk of reoffending. With permission from the Ministry of Justice, researchers were given access to police data on whether the people in the study reoffended.
A comparison with national reoffending rates showed that those on the programme were 40 per cent less likely to reoffend. The researchers concluded that the £1m invested in the programme had brought about savings of at least £10m, giving it a cost-benefit ratio of 1 to 10.
Rob Owen, chief executive of St Giles Trust, says: "Charities used to be measured by the size of their halos. What we have done since I've been with the charity is to make sure that everything is properly evidenced.
"For impact measurement to be credible, it needs to be done at reasonable scale, conducted by a highly regarded external organisation - and you need to get access to robust statutory data."