Suppose you hear of a new intervention that's never been tried or tested before. What are the chances of it producing decent results? Clearly, you've no idea: the uncertainty about its results is sky high. Now suppose there's an intervention with teenage mothers that a well-conducted, rigorous evaluation in the US suggests reduces child abuse and neglect, reduces child injuries by 20-50 per cent and improves children's educational outcomes. Will it produce those results in, say, Edinburgh?
Well, the uncertainty is lower now. But it's not nil. In fact, the UK results for this intervention - it's called the Nurse Family Partnership - were much less impressive, largely because the "counterfactual" (what would have happened anyway) in the US for teenage mums is that they don't get much support, whereas in the UK they get quite a bit, through the NHS.
Now, suppose that an intervention has been evaluated near you some time ago. There is still some uncertainty because contexts change over time - services pop up or die off, the political climate changes, wars start and thousands of refugees arrive suddenly and unexpectedly.
We could do this all day. Suppose you have evaluated the intervention in your own context very recently. Even then, there's uncertainty if you run it again because external factors might have changed - although the more recently you have run it, the lower the uncertainty.
Finally, even a well-run trial doesn't eliminate uncertainty. If, for example, in a nice, rigorous evaluation you see a 15 per cent rise in back-to-work rates among people on a programme relative to people not on it, you still cannot be absolutely certain the rise was due to the programme. A small sample size is the usual culprit. Maybe your evaluation took 100 people and randomly assigned half to the programme and half to a comparison group - but maybe, just by chance, the motivated people ended up on the programme and the unmotivated people off it. That would skew the result: you couldn't tell whether the rise was due to the characteristics of the participants or to the programme itself.
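The chance-imbalance problem can be sketched with a quick simulation. The 100-person trial and 15-point gap come from the example above; the 40 per cent baseline back-to-work rate is an illustrative assumption. Here the programme does nothing at all - both groups have the same true rate - yet small trials still regularly show large gaps:

```python
import random

random.seed(0)

def simulated_gap(n_per_arm=50, true_rate=0.4):
    # Both arms share the same true back-to-work rate (assumed 40%),
    # so any observed gap between them is pure sampling noise.
    programme = sum(random.random() < true_rate for _ in range(n_per_arm))
    comparison = sum(random.random() < true_rate for _ in range(n_per_arm))
    return (programme - comparison) / n_per_arm  # difference in rates

trials = 10_000
big_gaps = sum(abs(simulated_gap()) >= 0.15 for _ in range(trials))
print(f"{100 * big_gaps / trials:.1f}% of no-effect trials "
      f"show a 15-point gap by chance alone")
```

With only 50 people per arm, a double-digit share of these "null" trials throws up a 15-point difference anyway - which is exactly why a single small evaluation reduces uncertainty without removing it.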
Does all this mean evaluation is pointless? Absolutely not. It's the quest for certainty that is pointless - and mistaken.
People sometimes talk of charities "proving their impact". This is dangerous and misleading. Proof is a big concept and in social science - which is what impact research is - it is almost never found. Scientific investigations rarely prove things - disproving things is much easier - but they nonetheless help us understand how things work and hence make educated decisions. They inform our judgement by reducing uncertainty - but not to nil.
As the astronomer and science writer Carl Sagan wrote: "Science is much more than a body of knowledge. It is a way of thinking ... It counsels us to carry alternative hypotheses." We should always be alive to the possibility that something else might be going on. We're not looking to prove our impact, but should obsess about using the accumulated knowledge (in German, the word for knowledge, Wissenschaft, also means science) to maximise our chances of making a significant impact.
Caroline Fiennes is director of Giving Evidence and author of It Ain't What You Give