
How is Artificial Intelligence impacting your charity?

Artificial Intelligence (AI) has the potential to revolutionise the charity sector, enabling charities to work more efficiently and create a greater impact than ever before. However, despite these opportunities, charities must also be aware of the risks involved before adopting AI tools.

"AI offers powerful tools, but charities must prioritise balanced integration. AI cannot replace the human connection that builds trust and fosters meaningful relationships. Instead, charities could leverage AI to amplify compassion and strengthen their core mission, ensuring technology serves to enhance, not diminish, the personal touch."

- Gumayel Miah, Director

About the authors

Gumayel Miah

+44 (0)20 7556 1365
miahg@buzzacott.co.uk
LinkedIn

Will Fourie

+44 (0)20 7710 3227
fouriew@buzzacott.co.uk
LinkedIn

The Information Commissioner's Office (ICO) defines "AI" as "an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking."

The most widely discussed form of AI currently is 'Generative AI', encompassing tools capable of producing text, audio, images, or videos in response to user prompts. Such models include OpenAI's ChatGPT, Microsoft's Copilot, and DALL-E for image generation. Other types of AI can aid data analysis by identifying trends, making predictions, or sorting data into categories.

Since OpenAI introduced its GPT software in November 2022, the transformative potential of AI has become a major discussion point within the charity sector. While much of the initial speculation about AI's impact may have been premature, it remains crucial for organisations to consider how AI can enhance their capabilities, weighing these opportunities against the associated risks.

In this insight, we explore the role of AI in the charity sector and examine the unique risks it may pose.

Use of AI in the charity sector

The 2024 Charity Digital Skills report revealed that we are still primarily in the experimental stage in terms of finding established and proven applications for this technology. While two-thirds of charities surveyed in the report stated that they believed AI was relevant to their work, only a third listed it as a priority for the coming year.

Where AI has been utilised, its most common uses have included:

  • Developing online content such as digital marketing materials;
  • Automating administrative tasks, for example taking meeting minutes; and
  • Drafting internal documents and reports.

Many charities are interested in utilising AI for their fundraising efforts, with some using AI to help write fundraising bids. However, this may lead to unintended consequences for the charity, as generic or obviously artificially generated bid applications can jeopardise relationships with funders. Conversely, carefully tailored mailings could have the opposite, more positive effect.

There is hope that AI can directly improve how charities deliver services to their beneficiaries, although the nature of the services provided varies significantly. Charities should be aware that most generative AI systems will not have been developed with their beneficiaries in mind, so any experimentation in this area requires close supervision.

AI risks and strategies for mitigation

Despite some of the uses of AI noted above, there is still a reluctance to adopt AI tools in the sector. A 2024 survey by Charity Excellence revealed that although more than 90% of individuals working in the sector have used AI in their personal lives, the proportion of organisations in the sector utilising it in their operations is considerably lower, at around 60%. This is understandable, as the use of AI presents several key risks, including:

  • Factual accuracy;
  • Potential bias and discrimination;
  • Cost;
  • Data security and regulation; and
  • Sustainability.

Although these risks are listed individually, many of the mitigations for one risk will also address others. For charities looking to embed AI within their organisation, we would recommend that specific risk assessments are undertaken, and that AI policies are developed and introduced accordingly. Some suggested risk mitigation strategies are listed below:

Factual accuracy: Generative AI systems have a propensity to "hallucinate" - to invent facts, sources, or citations. They are trained to generate answers that are statistically likely, rather than answers that are factually accurate. Any use of such systems for research must be closely monitored by individuals who are qualified to judge the factuality of their outputs.

Potential bias and discrimination: Leading AI models are trained on almost all the content available on the internet and, by default, inherently reflect the biases found there. Charities which use AI to aid decision-making must therefore ensure close human oversight to avoid the risk of discriminatory treatment arising from these biases; charities working with vulnerable people are particularly exposed to this risk.

Cost: Upgrading an organisation’s digital infrastructure to support AI can require substantial investment, including the cost of updating data structures and the significant expense of AI licensing. Such investment is generally treated as part of a charity’s core costs, making it challenging to find funders willing to support it, as questions arise about whether it represents the best use of charitable funds. We would discourage the use of free AI software, as its licensing terms are not necessarily designed to protect user data, creating a greater risk of breaches.

Data security and regulation: Many charities handle sensitive personal information, and all charities process information which would be internally regarded as confidential. Often, AI models are built to learn from their interactions with users, which in practice means they may store and re-use any data that they were given. This may constitute a breach if the data shared was confidential. As noted above, this risk primarily arises when using free solutions, but charities must still review licensing agreements for paid solutions to ensure there are adequate protections. Some AI models are now being developed outside of the US, such as China’s DeepSeek, so charities should also consider whether they understand where their data is being stored internationally, and whether this would be acceptable to them and their stakeholders.

If a charity decides not to pursue AI applications at an organisational level, it should still consider that staff members may be using such tools in their work. Even if the charity does not pay for AI software, it must implement clear and sufficient acceptable use policies so that organisational data is not exposed.

Outside of GDPR, there is also uncertainty about how the use of AI sits within the framework of existing legislation and regulation. While the EU has recently introduced the AI Act to regulate AI, the UK identifies AI as a growth area and has deliberately taken a lighter-touch approach. The Charity Commission and the ICO are, however, the most relevant regulators for charities in the UK, so charities should keep a close eye on guidance from these bodies as it is released.

Sustainability: Most organisations now understand their responsibilities for promoting sustainability and helping to combat threats such as climate change. In the 2024 Digital Skills Survey, more than half of charities responded that climate change was an important factor for them in choosing digital suppliers. 

There are widespread concerns that AI presents threats to sustainability. Recent research from Goldman Sachs has revealed that it requires ten times as much energy for ChatGPT to respond to a query as it takes to complete a Google search. Additionally, research from the University of California, Riverside has warned that global AI demand may require trillions of litres of fresh water by 2027. Charities prioritising sustainability in their operations should weigh the effectiveness of AI against these environmental costs.

Public Perspectives

As a final point, we must not forget the privileged position that charities occupy in public life. Most charities rely on public funding, and the public places great trust in them to use donations appropriately. The Charities Aid Foundation has conducted worldwide research to understand how members of the public feel about the use of AI in the sector.

The majority of respondents surveyed believed that the benefits of using AI outweigh the risks, with the most positive responses coming from larger donors. A wide range of benefits were anticipated, including wider reach and increased efficiency. Among the negative responses, more than a quarter cited job losses in the sector as their biggest concern, while more than a third believed the data security or bias risks mentioned above were most important.

The remaining responses highlight another risk: that if charities make too much use of AI, they will lose the "human touch". The public looks to the charity sector and those working within it for compassion, transparency, and authenticity. If charities lose sight of these principles when pursuing AI, they may find that this trust diminishes.

The exploration of AI within the charity sector reveals a complex landscape of both immense potential and significant challenges. While AI offers opportunities for increased efficiency, expanded reach, and enhanced service delivery, charities must proceed with caution. We are still largely in an experimental phase, requiring careful consideration of ethical implications, data security, potential biases, and environmental impacts. Public trust, built on the responsible use of donor funds and sensitive information, is paramount. Therefore, charities must prioritise robust governance, implement clear acceptable use policies, and engage in strategic board-level discussions. Utilising available resources, such as those provided by the Charity Commission, Zoe Amar Digital, and the Charity Excellence Framework, will be crucial in navigating this evolving technology and ensuring that AI serves to strengthen, rather than undermine, the vital work of charitable organisations.

How can your charity navigate the adoption of AI?

The Charity Commission has published a blog to help charities understand AI and how it may impact the decision-making of its trustees. Zoe Amar Digital has also developed a checklist to aid charities in building AI understanding to make informed decisions. 

The checklist, designed to accommodate varying levels of technological expertise, offers guidance for both beginners and advanced users, promoting a tailored approach to AI integration and encouraging strategic board-level discussions. The Charity Excellence Framework has also developed a number of tools to help charities assess risk and develop appropriate responses, including a template risk assessment and register, and a framework policy for AI governance and ethics.

Get in touch

For professional advice tailored to your unique circumstances, please fill out the form below and one of our experts will be in touch to discuss your requirements and how we can help.
