AI has been one of the hottest topics among organizations across all industries for several years. Rightfully so: AI has become a time-saver and valuable resource, simplifying once time-consuming tasks at the push of a button. This capability has benefited not-for-profit organizations, which can leverage AI as a teammate to address staff shortages for tasks like written communication, graphic design, fraud detection and data management.
The Prevalence of AI
The statistics don’t lie: organizations across the globe use AI applications daily, with 83% of companies claiming AI is a top priority in their business plans. Additionally, 48% of businesses use AI to make effective use of big data. The average organization, including not-for-profits, regularly employs multiple AI applications.
Over 35% of all Software as a Service (SaaS) apps leverage AI. Given that a typical organization uses hundreds of SaaS applications (e.g., ChatGPT, Copilot, Salesforce, and many prominent HR and InfoSec tools), many organizations have dozens of AI-enabled applications in daily use.
Lack of Awareness of AI Use
Many not-for-profit organizations use AI but often fail to understand how their employees use it or how their vendors use it with their data. To address this, organizations should proactively ask critical questions:
- Do you understand how your employees use AI, what data it uses, and its purpose?
- Are you aware of how your vendors use AI, what data they share with it, and the purpose of its use?
- Do you know if your vendors use other AI services and how those services might access or utilize your data?
- Do you know if your vendors have AI governance programs to ensure their AI is transparent, ethical, non-biased and compliant with regulations (e.g., the EU AI Act, NIST AI Risk Management Framework)?
Without this transparency, not-for-profits may be liable for flaws in the AI they consume.
Common AI Risks
Many organizations do not fully understand or recognize AI’s risks:
- Hallucinations: In AI, especially natural language processing, “hallucinations” occur when models generate incorrect or nonsensical information. This can be caused by gaps in the training data or by limitations in the model's ability to understand the real world. For example, Google made headlines when its AI search results recommended using glue to keep cheese on pizza.
- Sensitive data leaks: If AI systems are trained on datasets containing personal information, there's a risk this data could be leaked during training or use, with serious consequences for individuals whose data is exposed. In a recent 3Gem study, 30% of employees admitted to posting sensitive customer information to AI tools, 28% to sharing corporate financial information and 17% to sharing confidential company information.
- Unexpected visibility: AI can also expose sensitive or proprietary information, leading to data breaches, privacy violations and unintended sharing of confidential details that compromise security and trust. Recently, a client reported that Copilot had unexpected visibility into the company's compensation data and disclosed that information to anyone who asked for it.
- Bias: AI outputs can lead to unfair or discriminatory outcomes, as the algorithms may reflect and amplify existing prejudices in the training data. The result may be biased decision-making that adversely affects individuals and perpetuates inequality in areas such as hiring. For example, Amazon abandoned an automated recruiting application after determining it was biased against women seeking STEM jobs.
These common risks can affect a not-for-profit organization by jeopardizing data security, perpetuating bias, and causing unexpected visibility issues. Let's look at some examples.
- Data Management and Analysis: Employees using AI tools to analyze donor data to identify trends and target potential donors might inadvertently expose sensitive information if the AI tool lacks proper security measures. This could lead to data breaches, violating privacy laws and damaging the organization's reputation.
- Automated Communication: Vendors using AI-powered chatbots to handle donor inquiries and process donations without adequate oversight risk incorrect or insensitive responses from the AI, harming donor relationships and leading to loss of trust and donations.
- Fraud Detection: Employees relying on AI algorithms to detect and prevent fraud in financial transactions without regular audits and updates may find that outdated or biased algorithms fail to identify new types of fraud, leading to economic losses and compliance issues.
- Content Creation: Vendors using AI to generate marketing materials, social media posts, or grant proposals without proper human review might produce content containing inaccuracies, plagiarism, or misalignment with the organization's mission and tone, potentially causing reputational damage.
- Third-Party Data Sharing: Vendors sharing the organization's data with third-party AI services to improve service delivery or analytics can result in a lack of transparency and control over how these third parties use and protect the data, leading to data misuse or breaches and affecting the organization's integrity and donor trust.
- HR and Recruitment: Using AI to screen and hire employees without understanding the algorithms used could perpetuate biases and discrimination, leading to unfair hiring practices and potential legal issues.
By being aware of these risks, not-for-profit organizations can take proactive steps to ensure proper AI governance, data protection and alignment with their ethical standards.
Get Ahead of AI Governance
Not-for-profit organizations consuming AI can avoid these risks by implementing strong AI governance and protocols. This includes establishing an AI acceptable-use policy, updating vendor risk management processes to identify and validate the use of AI, and ensuring vendors effectively manage AI-related risks. By proactively addressing these areas, organizations can safeguard their data, maintain compliance, and enhance stakeholder trust. Not-for-profit organizations developing AI should consider more robust AI governance, ideally aligned with the ISO 42001 standard.
Given the sharp increase in AI adoption, working with cybersecurity experts to develop a mitigation plan is equally essential. Experts can help ensure organizations stay protected and resilient against emerging risks.
At CBIZ, our cybersecurity team works with not-for-profit organizations daily to ensure that cybersecurity, privacy and AI risks are effectively managed. This allows management to focus on growing the organization and fulfilling its mission. Connect with us today to learn more.
Copyright © 2024, CBIZ, Inc. All rights reserved. Contents of this publication may not be reproduced without the express written consent of CBIZ. This publication is distributed with the understanding that CBIZ is not rendering legal, accounting or other professional advice. The reader is advised to contact a tax professional prior to taking any action based upon this information. CBIZ assumes no liability whatsoever in connection with the use of this information and assumes no obligation to inform the reader of any changes in tax laws or other factors that could affect the information contained herein.
CBIZ MHM is the brand name for CBIZ MHM, LLC, a national professional services company providing tax, financial advisory and consulting services to individuals, tax-exempt organizations and a wide range of publicly traded and privately held companies. CBIZ MHM, LLC is a fully owned subsidiary of CBIZ, Inc. (NYSE: CBZ).