Goldman Sachs and impact reporting: who are the muppets now?

In an online commentary, Caroline Fiennes, the director of Giving Evidence, talks about the investment bank's corporate philanthropy efforts

Caroline Fiennes

The legendary investment bank Goldman Sachs was described by Rolling Stone magazine two years ago as being "like a great vampire squid wrapped around the face of humanity"; and a former executive, who resigned very publicly last month via a New York Times article, revealed that some of its staff refer contemptuously to clients as "muppets".

But what about its achievements in corporate philanthropy? Well, it has run an astonishingly effective philanthropic initiative on immunisation – and made, in another project, some spectacularly spurious claims about charitable impact.

First, the immunisation. Great corporate philanthropy involves using a company's unique resources to create benefits that only that company could have generated. Goldman Sachs provided a small group of bankers on a pro bono basis to create the innovative International Finance Facility for Immunisation.

This takes numerous governments’ financial commitments to health and, by issuing bonds secured against them, makes the money available up front. This enables better planning, which accelerates research and development, reduces vaccine prices and speeds delivery. Vaccinations enabled by the bond are expected to have immunised half a billion children, fully three million of whose lives are thought to have been saved by the IFFIm alone.

But then, by contrast, there’s the Goldman Sachs programme called 10,000 Women, which supports female entrepreneurs. It takes out full-page adverts in magazines to share some data that it presumably thinks should impress us: "70 per cent of [the programme's] graduates surveyed have increased their revenues, and 50 per cent have added new jobs."

In my view, this is the worst type of 'impact reporting', because it tells us precisely nothing.

To understand charities’ impact we need to answer two questions: first, what happened? And second, how is that different from what would have happened anyway? The data that Goldman Sachs gives here fails to answer either question. It’s an error common to many charities' impact reports.

What happened? The data doesn’t even show whether the performance of the women on the programme improved. Perhaps they were doing just the same beforehand – perhaps they were doing better before and the programme dulled their skills. At the very minimum, charities should report not just ‘after’ data (as Goldman Sachs is doing here) but ‘before’ and ‘after’ data, so we get some sense of the change.

What would have happened otherwise? Even if those women's performance has improved, has it improved more than it would have done otherwise? Again, we've no idea because there's no control or comparator. Perhaps all businesses have grown that much – or perhaps others that didn’t do the programme have grown more. A better statement would be "70 per cent of graduates increased their revenues, whereas only 20 per cent of other businesses did in the same period". The "other businesses" here would be acting as a control group. This is very basic statistics.
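For the statistically minded, the comparison suggested above can be sketched in a few lines of Python. The figures are the hypothetical ones from the example – 70 per cent of graduates versus 20 per cent of other businesses – not real programme data:

```python
import random

random.seed(0)

# Hypothetical illustration: simulated outcomes for programme graduates
# and for a comparison group of other businesses in the same period.
graduates = [random.random() < 0.7 for _ in range(1000)]  # ~70% grew revenue
others = [random.random() < 0.2 for _ in range(1000)]     # ~20% grew revenue

grad_rate = sum(graduates) / len(graduates)
other_rate = sum(others) / len(others)

# The comparison group gives the "what would have happened anyway" baseline
# that the bare 70 per cent figure lacks.
print(f"graduates: {grad_rate:.0%}, comparison group: {other_rate:.0%}")
```

Only the gap between the two rates, not the graduates' rate on its own, says anything about the programme.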

In fact, the control group in the 70 per cent/20 per cent example still wouldn’t prove much. It’s not hard to imagine that the kind of women who get themselves onto a Goldman Sachs programme are just the kind of go-getters who would do well in virtually any circumstance. This selection bias means that we don’t know whether the results are due to the programme itself or to systematically unusual characteristics in the women it selects.

The only way to control for selection bias is an experiment in which a researcher takes a large enough set of female entrepreneurs who are eligible for the programme and randomly assigns them either to do the programme or not to do it. The researcher then compares what happens to the two groups’ performance over time. The latter set is a control group, which will – if the experiment is done right – show what the women who did the programme would have achieved otherwise. Voilà.
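The random assignment step can be sketched very simply. This is a toy illustration with made-up names, not anything from the programme itself:

```python
import random

random.seed(42)

# Hypothetical illustration: a pool of eligible entrepreneurs,
# randomly split into treatment (does the programme) and control
# (does not do the programme).
eligible = [f"entrepreneur_{i}" for i in range(200)]
random.shuffle(eligible)

treatment = eligible[:100]  # offered the programme
control = eligible[100:]    # not offered the programme

# Because assignment is random, go-getters are equally likely to land
# in either group, so any later difference in the groups' performance
# can be attributed to the programme rather than to who selected in.
print(len(treatment), len(control))
```

The point is that randomisation, not any clever analysis afterwards, is what removes the selection bias.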

Selection bias is also common in charities' impact reports. Randomised controlled trials of the type described are, happily, increasingly used to figure out what is really working: the Education Endowment Foundation is using them in UK education, and both Innovations for Poverty Action and J-PAL (the Abdul Latif Jameel Poverty Action Lab) use them for alleviating extreme poverty.

Selection bias, controls and randomisation are standard tools in the statistician’s box. The masters of the universe should make better use of them.

Caroline Fiennes is the director of Giving Evidence and author of It Ain't What You Give, It's The Way That You Give It: Making Charitable Donations That Get Results, which was published last week.
