Jean Ellis: Measuring social value - avoiding a methodological straitjacket

The lead consultant at Charities Evaluation Services warns against the sector adopting a 'one-size-fits-all' approach to impact measurement

Social return on investment (SROI) offers a measure of value expressed in terms of financial benefits related to service or project costs. This is understandably attractive to government and charitable funders looking for savings to the public purse or the best possible return on investment of funds. The method has also been greeted with enthusiasm by some third sector organisations themselves – who have seen an opportunity to use results expressed in monetary terms to their advantage when competing for contracts. While Charities Evaluation Services welcomes SROI as a potentially valuable tool in the evaluation toolbox, we would caution against its application as a methodological straitjacket.
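
To illustrate with purely hypothetical figures: a project costing £100,000 that is judged to have created £300,000 of social value would report an SROI ratio of 3:1 – a claim that every £1 invested generates £3 of social value. The apparent precision of such a ratio is, of course, only as good as the assumptions and valuations behind it.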

This newly adopted method seems here to stay, at least for now, and we can look forward to further evidence of how useful it proves to the sector. But it is worth asking to what extent SROI, as an evaluation method, is driven by the context of financial stringency, funding cuts and an overall market-based ethos, rather than by organisational or service needs.

Third sector monitoring and evaluation trends have long been dominated by government and funding agendas. CES and others supporting evaluation in the sector have sought to work alongside those agendas to develop evaluation practice that is truly useful, providing not only accountability, but feeding back into operational and strategic decision making, illustrating good practice and generating innovation and replication.

Over the past 10 years or so, government and funder requirements have created a considerable and positive shift in focus from monitoring organisational outputs to measuring user outcomes. However, many organisations have had real difficulties in developing effective outcomes monitoring systems.

Further, the tension between a government agenda and the complex realities of the contribution of a single organisation to changed outcomes is likely to increase as the third sector is drawn even more into contracts to deliver public services. Organisations anxious to demonstrate how their service contributes to community-wide measures of change often struggle; they may welcome the ability to express benefits in a single overall monetary value as a useful tool in a competitive funding environment.

A regrettable – though not inevitable – side effect of a reinforced emphasis on outcomes is a shift in attention away from evaluation of how things are done. Yet understanding how a programme or project works, and what constitutes good practice, is essential if evaluation is to be used for learning and replication. It is also important to acknowledge the contexts in which programmes are implemented, and how these affect the benefits to users.

As well as this current focus on demonstrating social and economic value, there is also a pull towards requiring the sector to use more 'scientific' evaluation methods. These approaches – for example, using control groups – have long been regarded as the gold standard in public policy evaluation, but they are costly in a third sector context and can be fraught with difficulty.

Both economic evaluation and more scientific approaches may also demand greater technical skills than are available to internal evaluators. As such, they risk running counter to an important developing trend: embedding monitoring and evaluation as an essential element of everyday work practice and organisational management.

As third sector organisations are driven further into competitive tendering, there is a real risk that attention will focus on 'proof' of value, measured as maximum financial return, to the neglect of the real benefits of broader evaluation approaches. We should also be careful that the sector is not driven to a 'one-size-fits-all' approach, which appears to be favoured in some quarters. We must make sure that complex, externally driven methods supplement and reinforce, rather than replace, more internally driven ones.

Jean Ellis is lead consultant at Charities Evaluation Services
