While I was presenting on ethics a few years back, someone challenged me on the use of behavioural science/nudge techniques in fundraising.
His concern was that people could be manipulated into giving without making a conscious choice to give.
It was ironic that only the previous day, the same person had been cheering the fundraiser and activist Dan Pallotta for championing the idea that charities should behave exactly like businesses. And one of the things commercial marketers do a lot is use nudge methods to get people to buy stuff.
Irony aside, that doesn’t mean this person was wrong to challenge the ethics of nudging in fundraising and charitable giving. Yet hardly anyone does (here’s a Google Scholar link for relevant articles – the two at the top both oppose nudging as an infringement of donors’ autonomy).
As often happens in fundraising, whenever we encounter a new idea from outside our sector, we are so preoccupied with how we can use it that we rarely stop to think about how we ought to use it – or, indeed, whether we ought to use it at all.
Something similar is happening as the fundraising profession gets to grips with artificial intelligence.
This is something my organisation, the fundraising think tank Rogare, is looking at through a project led by the American fundraising consultant Cherian Koshy, who is doing as much thinking on this issue as anyone (and probably more).
Cherian, who I'd like to thank for his input into this piece, characterises the debate as dominated by ‘facile’ discussions about how to use AI.
Sure, if it is to be used, it should be used ethically. But that’s where our ethical discussion seems to be focused – on the practicalities of using AI in fundraising – with people latching on to all its perceived benefits (including the ethical ones) without considering unintended negative outcomes.
For example, last year, a Canadian charity proclaimed that it had solved the poverty porn issue by using AI-generated images of its beneficiaries. This is not to take anything away from this charity for trying to address such a thorny ethical challenge.
But, following Cherian’s insight, it was a superficial treatment that focused on one ring-fenced ethical challenge (how to represent beneficiaries) without considering the myriad knock-on issues.
One is that real people are excluded from telling their own stories (an issue not confined to AI ethics). And any perceptions of othering, saviourism and stereotyping remain, irrespective of whether the image is generated by AI.
A further ethical issue concerns bias.
AI must draw on existing information. But that might be distorted by biases, such as the depiction of beneficiaries, or heavily skewed, such as the demographic make-up of donor bases.
That can reinforce stereotypical representations, or lead to the exclusion of more diverse demographic groups from philanthropy.
AI is aware of (some of) its limitations. Ask ChatGPT what the ethical issues are in using AI in fundraising, and it’ll list factors that include data, authenticity, bias, exploitation and access.
But these are generic concerns that apply in many contexts and aren’t specifically addressed to the ethical implications of AI’s use in fundraising – which is what Cherian’s team is doing at Rogare. (And some blogs about non-profit AI ethics appear to rehash AI-generated content, so be warned.)
I’ve asked ChatGPT many questions about fundraising ethics/ethical fundraising.
What I take from its answers is that it has a limited and simplistic understanding of this matter that’s based on complying with the code of practice and donor-centred concerns.
AI has never offered any information about donor power/privilege and donor dominance unless prompted to do so by asking a specific question.
For ChatGPT to tell you about these things, you need to already know about them.
The only time it’s ever told me about community-centric fundraising – a major challenger to the donor-centred ethical orthodoxy – was in response to the question ‘What is community-centric fundraising?’. But even then, it didn’t really understand what CCF is.
As the use of AI grows, it is vital that we incorporate it into fundraising’s ethics and regulation. (The Fundraising Regulator’s recently announced consultation on the code of practice is clear that it wants to include AI within its regulatory remit.)
Doing this will require more thinking than is currently being devoted to the matter.
One of the biggest challenges to overcome is that AI doesn’t yet know enough about the ethics of fundraising to be able to recognise its own limitations in advising on what ethical fundraising is, nor the ethical dilemmas that come with using AI in fundraising.
Ian MacQuillin is director of the fundraising think tank Rogare