Caroline Fiennes: Presenting assessments of a charity's performance doesn't necessarily increase donations

Research on how scientific rigour affects donations shows mixed results, according to our columnist

Caroline Fiennes

Interest in the impact that charities have has risen hugely in recent years, so it's remarkable that rigorous evidence about whether donors actually care is only now emerging. It's a mixed picture.

A paper published last year reported on an experiment at a US charity, Freedom From Hunger (FFH). The charity divided its donor list into two random groups. Those in one group received a conventional solicitation with an emotional appeal and a personal story of a beneficiary, with a final paragraph suggesting that FFH had helped that beneficiary. Those in the other group received a letter identical in all respects except that the final paragraph stated (truthfully) that "rigorous scientific methodologies" had shown the positive impact of FFH's work.

Donations were barely affected. Overall, the mention or omission of scientific rigour had no effect on whether someone donated, and only a tiny effect on the total amount raised. But this average concealed diverging responses. People who had supported the charity infrequently were not swayed either way. People who had previously given a lot (more than $100) were prompted by the material on effectiveness to increase their gifts, giving an average of $12.98 more than their counterparts in the control group. On the downside, people who had previously made frequent gifts of less than $100 became less likely to give and also shrank their average gifts by $0.81. All told, the net effect was about nil; but on the upside, the finding implies that more serious donors will give more if they are presented with decent evidence of effectiveness.

A separate study, in Kentucky, looked at whether donors give more when there is an independent assessment of a charity's quality. Donors were each approached about one charity from a list; each charity had been rated three or four stars (out of four) by the information company Charity Navigator. Half the donors were shown the rating; the other half were not. The presence of the ratings made no meaningful difference to their responses.

The third study has not yet been published but is perhaps the most telling. It was a multi-arm randomised controlled trial in which a large number of US donors each received an appeal from one of a set of charities with various Charity Navigator ratings. Half of the appeals included the charity's rating; the other half did not.

The overall effect of presenting the information was to reduce donations. For four-star charities, showing the rating brought no more benefit than withholding it. For charities rated below four stars, showing the rating reduced donations, and the lower the rating, the greater the reduction.

Donors appeared to treat evidence of effectiveness as a hygiene factor: they seemed to expect every charity to have a four-star rating, and they reduced their donations when disappointed but never increased them, because they were never positively surprised.

Three swallows don't make a summer, of course, so there's much more to know about donor behaviour. Even if it transpires that donors really don't care, our constituents do – hence, so must we.

Caroline Fiennes is director of Giving Evidence and author of It Ain't What You Give
