Top Ten AI Usage Policy Considerations for Nonprofits
Holly Peterson, Esq., Counsel
Tenenbaum Law Group PLLC
April 24, 2026
Artificial intelligence (AI) is rapidly transforming nonprofit business operations. AI holds great promise for enhancing efficiency, optimizing workflows, analyzing data, and boosting productivity. At the same time, AI presents legal, ethical, reputational, and other risks. A sound AI usage policy can enable a nonprofit organization to capture the promise of AI without compromising work product integrity, disclosing privileged, confidential, or proprietary information, eroding its mission, jeopardizing its reputation, or incurring undue legal exposure. Leaders in the nonprofit community should consider the following practical advice when developing and implementing an AI usage policy for their organizations.
1. Build out a process to monitor and correct inaccuracies.
While AI can quickly generate helpful outputs, it is also notorious for sometimes generating inaccurate information. To guard against inaccuracies, require everyone who uses AI on behalf of the nonprofit organization to possess the expertise needed to generate the work product themselves, so that they can evaluate and refine the output and attest to the quality, accuracy, and integrity of the final work product.
2. Require human authorship.
Copyright in work product that nonprofit employees create in the ordinary course of their employment ordinarily vests in the organization under the “work-made-for-hire” doctrine, and volunteers often assign or license copyright to the organizations they serve. The U.S. Copyright Office and U.S. courts have consistently taken the position that purely AI-generated material lacking meaningful human creative input is not copyrightable. This means that if a nonprofit employee or volunteer uses AI to generate a work product, the work product will not be eligible for copyright protection unless it bears more than a de minimis imprint of human authorship. Accordingly, make clear in your policy that while AI can be leveraged as a suitable “consultant,” copyrightable work product ultimately requires human authorship.
3. Implement guardrails for privileged, confidential, and proprietary information.
Large language models draw on vast pools of crowd-sourced information to generate output, and AI tools do not distinguish sensitive inputs from any other source material. For that reason, nonprofits must carefully protect privileged, confidential, and proprietary information. Implement guardrails governing when, if ever, such information can be entered into an AI tool. With only very limited exceptions, Board and committee meeting minutes, financial information, confidential employee, donor, or member information, privileged communications with counsel, and other proprietary or confidential information should never be entered into an AI tool, especially a free one. If you need to use AI in connection with privileged, confidential, or proprietary information, consider purchasing rights to, and approving employee use of, specific closed AI tools so that you can leverage AI capabilities without risking unwanted disclosure. In all instances, implement guardrails around employee and volunteer use of free or unapproved AI tools to mitigate the risks that accompany unwary acceptance of click-wrap agreements, which often confer broad rights on AI platforms to use, learn from, and disseminate the content users enter, all of which can put the nonprofit’s sensitive information at great risk.
4. Communicate policy expectations to employees and volunteers.
Employees and volunteers may have different levels of sophistication, comfort, and risk tolerance when using the various AI tools on the market. Clearly communicate expectations so that all employees and volunteers understand the parameters of approved and prohibited AI use.
5. Do not allow AI to obscure mission, erode brand, or eclipse what makes the organization unique.
By crowd-sourcing large amounts of information, AI favors conformity. Imagine asking AI to generate a membership recruitment video based on your organization’s website. While the tool will almost certainly embed snippets of tailored content, it will inevitably default to describing “a dynamic organization of industry professionals” with promised opportunities to “share knowledge and network with colleagues.” Catchy music (the kind you can imagine in your head right now) will play against the backdrop of a diverse group of business professionals smiling, shaking hands, and networking. Such generic work products can ring hollow for mission-driven organizations. Consistent with the expectation that AI work products must bear more than a de minimis imprint of human authorship, build mission alignment and brand management expectations into your AI policy to make sure that your organization’s imprint is seen, heard, and felt.
6. Do not allow AI to stifle innovation.
AI generates output based on existing content. In some instances, that is helpful: if you would like to build an itinerary to explore famous museums in Florence, an AI trip-builder may serve you well, because the famous museums already exist. In other instances, especially in the scholarly context, AI’s reliance on existing information can be problematic. Imagine, for example, an AI peer review tool evaluating a scholarly article that advances a novel theory. It may encourage the author to defer to existing literature at odds with the novel theory rather than critically and thoughtfully evaluating the new concept as a potentially valuable contribution to the scholarly canon. Take care to evaluate how AI may be useful or detrimental in different contexts.
7. Review work product for discrimination, tort liability, and other areas of legal exposure.
AI does not evaluate content for discrimination, defamation, invasion of privacy, intellectual property misappropriation, federal False Claims Act violations, or other sources of legal liability. Regrettably, because AI draws from such a large volume of unvetted information, it inevitably consults tortious, biased, and inaccurate content; images that embed protected copyrights and trademarks; and other problematic material. This can create exposure for a nonprofit organization that uses AI-generated resources. To mitigate that exposure, make sure that human reviewers evaluate all AI-generated work products for discriminatory, defamatory, infringing, tortious, and other legally problematic content. When using AI in higher-risk areas, such as generating proposals or reports for federal grants, or handling personally identifiable information regulated under federal, state, or international data privacy laws, take special care to evaluate the output against the organization’s legal obligations. Similarly, if bias or discrimination is found to have influenced a process, decision, or work product in any way, such as where AI is used as a screening and evaluation tool in the hiring process, take all necessary measures to remedy it and ensure the integrity and lawfulness of the process, decision, or work product.
8. Embed a clause in third-party vendor, consulting, and other independent contractor agreements requiring compliance with the organization’s AI usage policy.
Independent contractors may or may not be bound by organizational policies. Include a provision in all agreements with independent contractors requiring them to abide by the organization’s AI usage policy when providing products and services to the organization, and transfer the risk of any material breach of that provision to the vendor or consultant, including through indemnification. Similarly, for unpaid speakers, authors, Board and committee members, and other volunteer leaders, while indemnity would be atypical and heavy-handed in most situations, all agreements (including participation forms) with such individuals should require adherence to the organization’s AI usage policy.
9. Do not allow AI to substitute for human relationships.
AI cannot substitute for the human relationships that are of paramount importance to membership associations and other nonprofit organizations. Efficiency, automation, and the other benefits of AI must be weighed against the erosion of human interaction and its impact on the organization’s mission and community. For instance, an automated AI-generated email may infuriate a dedicated volunteer who craves the human touchpoints that association membership provides. Use AI sparingly when it would supplant functions that previously ran on human engagement.
10. Stay up to date on the evolving law.
The law surrounding AI and its usage is evolving rapidly. Ten years ago, we were barely talking about AI. Today, as of this writing, 38 states have enacted at least one law governing AI, and all 50 states have introduced bills to regulate AI use. There is also intense pressure on federal policymakers in Washington, DC to regulate AI, though comprehensive federal legislation has yet to materialize. These laws and regulations are being enacted while we are still exploring AI’s full potential and confronting some of its greatest risks, and we are far from the final iteration of a comprehensive legal and regulatory scheme governing AI. With that in mind, any policy you adopt must be nimble enough to evolve with the law.
For more information, please contact the author at hpeterson@TenenbaumLegal.com.