Artificial Intelligence (“AI”) is THE technology of the moment…and will likely be for the coming decade. How can and should nonprofits employ AI safely, responsibly, and productively to empower employees and to benefit programs, services, operations, and fundraising? This article, co-authored by Plan A Advisors with Professor Lauri Goldkind of the Graduate School of Social Service at Fordham University, serves as a primer for nonprofit professionals interested in using AI to increase organizational efficiency and effectiveness. The article uses the term LLM (Large Language Model) rather than AI to best reflect the kind of AI that most nonprofits would employ.
What Is Artificial Intelligence?
Given the ballooning combination of hype, consternation, and concern in the news, you might think that AI dropped from the sky. But scientists and programmers have been developing artificial intelligence and its applications since the 1950s and ’60s. The AI most of us are likely to engage with in our daily jobs is a Large Language Model (LLM). LLMs are designed to synthesize massive quantities of available data and produce “new” content that answers a question or follows a directive.
Large Language Models, a type of Generative AI, captured the public’s excitement in late 2022 when OpenAI, a privately held software company, released ChatGPT. What makes an LLM different from a regular Google search or software command is its interactive nature: because LLMs are “generative,” they respond differently each time you refine your question or command, and they recall prior “conversations” so you can iterate. You can interact quite intuitively with an LLM without learning how to speak computer science. As in normal conversation, you can ask clarifying questions or refine your request as you assess the LLM’s outputs.
But there has to be a human in the loop.
Harms. LLMs are great for routine, repetitive tasks like writing thank-you notes or creating boilerplate program descriptions, but nonprofits should be cautious about introducing LLMs into their workflows without considering the potential harms. LLMs are a sophisticated technology built by humans who bring their own unconscious bias to the table when developing code. In addition, LLMs are “trained” on data that codify the structural biases found in any society. For example, LLMs famously assign stereotyped genders to job titles, reporting that all construction workers are men and all kindergarten teachers are women.
Hallucinations. Hallucinations are the cute name given to the wrong answers an LLM can deliver. LLMs work by probabilistically linking sequences of words together, and sometimes a model links the wrong set of words. For example, a chatbot released by the New York City Mayor’s Office delivered false information about the City’s small business regulations.
Hope. While there are risks associated with using an LLM, the upside potential is undeniable – especially for those in the nonprofit sector who are under-resourced and under-staffed. Using an LLM like a very quick intern can be a huge boost for productivity and efficiency, making time for the mission work on which staff want to focus. Lower-stakes tasks – those that don’t involve client data or personally identifiable information (known as “PII”) – can easily be automated or sped up.
Bring a discerning eye to anything that AI generates for you. Check for bias. Double-check the facts. Just as you would with a capable intern’s work.
Successive editions of this series will explore specific workplace applications for AI and suggest an approach to workplace policy.
Development, Communications & H.R.
LLMs can be readily used for a wide range of applications in your nonprofit’s Development or Advancement Office, Marketing & Communications Department, and Human Resources function. For example, an LLM can draft a “low-stakes” document such as a donor acknowledgement or even a program description in seconds.
Donor Relations. Use an LLM to compose funding requests, donor acknowledgements, pledge payment reminders, grant proposals, funder reports, event invites, and more. Some of the paid LLMs even have graphic design capabilities.
Marketing & Communications. Use an LLM to draft newsletter articles, social media posts, program descriptions, CEO speeches, and press releases. You can also use AI to help design a logo.
Human Resources. In the HR arena, LLMs can be useful across your entire workflow: drafting position descriptions, interview questions, and rejection letters, and even reviewing resumes. In addition to common recruitment and hiring tasks, AI can support the production of compliance materials like employee handbooks and social media policies; it can also create PowerPoint decks for professional development, training, and case studies.
Refining prompts. The instructions humans give an LLM to generate a response are called “prompts.” Prompts can be refined and revised over time, producing more customized results with each iteration. For example, if your agency’s style guide calls for a conversational tone in Development and Communications materials, you might prompt a model by typing: “Assume you are a Director of Development in a Museum of Folk Art and Quilting. Use a conversational, friendly tone to create a 50-word thank you note.” The more specific the prompt, the better.
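For organizations that want to automate this kind of low-stakes drafting rather than work through a chat window, the same refined prompt can be sent programmatically. The sketch below is a minimal illustration, assuming the OpenAI Python client and an API key set in the environment; the model name and prompt wording are placeholders, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use your provider's current model
    messages=[
        {
            "role": "system",
            "content": "Assume you are a Director of Development in a Museum of Folk Art and Quilting.",
        },
        {
            "role": "user",
            "content": "Use a conversational, friendly tone to create a 50-word thank you note.",
        },
    ],
)

# The draft still needs a human review before it goes to a donor.
print(response.choices[0].message.content)
```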
Evaluation & Analysis
LLMs offer tools for evaluation, analysis, and the visual representation of data. For instance, an LLM like ChatGPT or Gemini can summarize the themes in qualitative data such as open-text responses, interview data, or audio recordings. It can also perform statistical analysis on quantitative data from spreadsheets, data tables, and other sources.
LLMs need a human fact checker to make sure results are realistic and accurate, but they can cut down on analysis time dramatically. For instance, you might produce the graphs, charts, and key quotes for your annual report in as little as an afternoon – something that would take far longer without LLM support.
Qualitative. Use an LLM to develop surveys and interview questions to gather feedback and evaluate the impact of a program or service, an exhibition or performance, a course or a procedure. How are users experiencing what you offer? Participant interview data can help you shape your programs and also offer rich stories to funders and other stakeholders. An LLM can enhance the qualitative project cycle from start to finish: creating interview questions and prompts, transcribing recorded interviews, performing thematic analysis on the text, and summarizing it all in a report. You can then prompt the LLM to surface the most compelling quotes from a range of stakeholder perspectives – for example, the three most compelling quotes for a donor who is interested in a specific population or service area.
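As an illustration of the thematic-analysis step, the hedged sketch below sends a handful of open-text responses to a model and asks for themes and quotable excerpts. It assumes the OpenAI Python client; the sample comments, model name, and prompt wording are hypothetical, not a prescribed workflow.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical open-text survey responses; in practice these might be
# exported from a survey tool or transcribed from recorded interviews.
comments = [
    "The after-school art classes gave my daughter real confidence.",
    "Scheduling was hard for working parents; weekend sessions would help.",
    "Staff were welcoming, but the registration form was confusing.",
]

prompt = (
    "You are an evaluation analyst at a community nonprofit. "
    "Identify the main themes in the participant comments below, "
    "then list the two most compelling quotes for a funder report.\n\n"
    + "\n".join(f"- {c}" for c in comments)
)

result = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(result.choices[0].message.content)
```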
Quantitative. Use an LLM for reporting purposes to analyze the numerical data you’ve collected and, with data visualization tools, create charts and graphs that depict trends and suggest what is driving them. AI can also suggest relevant external data sets that help a nonprofit tell the story of its impact. How successful is your nonprofit in reaching its desired client population? How effective is your organization in addressing an issue relative to others of its size and kind? You can ask AI to prepare a heat map showing your reach relative to your competition, or the relationship between the availability of a set of services and the socio-economic status of the population you serve. You can also use an LLM to recognize patterns in your data that reflect behaviors – for example, a deep dive into your donor database to reveal shifts over time in how people give in response to economic conditions or the ways they are solicited.
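A chart like those described above can be produced by asking the LLM to generate it directly or by asking it to write the analysis code for you. The sketch below shows the kind of short script a model might return, assuming a hypothetical donor-database export named gifts_export.csv with “year” and “amount” columns; the file and column names are illustrative only.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export from a donor database: one row per gift,
# with a "year" column and a dollar "amount" column.
gifts = pd.read_csv("gifts_export.csv")

# Total giving by year: the kind of trend an annual report might highlight.
by_year = gifts.groupby("year")["amount"].sum()

by_year.plot(kind="bar", title="Total Giving by Year")
plt.ylabel("Dollars")
plt.tight_layout()
plt.savefig("giving_by_year.png")
```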
AI: Market Research
LLMs can be used effectively for market research, delivering results well beyond what a typical search engine (e.g., Google) can offer. They can draw from a vast universe of available information and synthesize and organize large volumes of data into manageable, useful results. You can also restrict an LLM to a “gated” set of data – documents you supply – so it pulls only from sources you want it to use, not from everything it finds.
An LLM can also develop the structure for gathering, analyzing, and presenting the data it finds. For instance, if you were interested in creating a youth arts program as an extension of a community outreach effort at a local museum, you could ask an LLM to identify twenty of the best programs of this type across the country, based on the number of museum employees and budget size. The model can then categorize the twenty responses by population served, programmatic focus, and whether a program takes place in the museum or offsite in a school setting.
Lists. LLMs can generate lists such as names of prospective donors who have a demonstrated interest in your type of programmatic activities or foundations that support related causes; media outlets and advertising platforms for publicity purposes; subject experts, consultants or prospective vendors for purchases (e.g., HVAC repair, search firms, web designers, lobbyists…); and bibliographies on any subject.
Templates and Summaries. LLMs can design the template that you use to gather and analyze information, like a standard donor profile or evaluation tool. LLMs can summarize findings on a particular topic or person. Then, the LLM can fill in the template with summarized information in response to your request – for example, building a prospect profile of an individual you might be considering for a board position, or a family you are about to solicit. LLMs may find key information your Development Office might be missing.
Landscape Analysis. LLMs can identify a group of entities that share characteristics and rank their relative influence or stature. Say you are looking to enter a new market with a program or service: an LLM can tell you which nonprofits you might compete with, along with something about their leadership and funding.
Best Practices. LLMs can report on the relative effectiveness of various approaches to a problem or challenge and cite evidence to support their conclusions. For example, you can ask for five evidence-based practices shown to change workplace behavior around diversity, equity, and inclusion (DEI).
Risk Mitigation. LLMs can analyze the risks associated with a program or financial decision and suggest the liabilities that might be considered as part of a (human) decision-making process. For instance, when choosing a contractor for local childcare needs, you could ask a model to generate risk scenarios: How much staffing capacity does such a program require? What education level or other conditions does it need to be successful?
Itineraries. LLMs can build a schedule for a trip or a conference and identify transportation or venue options. For instance, to decide between two host locations for your next conference, you might ask a model to compare the pros and cons of two smaller cities, like St. Louis and Kansas City, on factors such as restaurants, nonprofit employment, enrichment activities, and proximity to university campuses.
Program Plans. AI can develop a solution to a problem – like designing a new program to meet a demonstrated need – building on examples and responding to parameters around cost, location, and scheduling.
AI: Policy & Practices
In many ways, it’s the “wild west” when it comes to how LLMs are evolving and how they are and can be used. Nonprofits are wise to get ahead of their use and application.
LLM Vendors. Choose a respected provider from among the many options. OpenAI’s ChatGPT is among the most recognizable brands, but there are also Google’s Gemini, Anthropic’s Claude, Meta’s Llama, and others. If you license an “enterprise version” of your selected vendor, it can become a standardized part of workflows across departments, similar to other enterprise systems like your CRM, QuickBooks, or Microsoft Office. Paid and enterprise tiers are more likely to include up-to-date information, higher usage limits, and stronger data protections, and their newer models tend to hallucinate (make things up) less often – though no model eliminates that risk. Make sure the provider’s functionality meets your nonprofit’s projected needs. For instance, if you only want to offer staff text-based queries, a free version might suffice. However, for more sophisticated users interested in features like uploading datasets and graphic design capabilities, the enterprise version is worth serious consideration.
Bundling. For more complex research projects, several LLMs can be “bundled” to work together. Each is given its own distinct prompt, and a final prompt synthesizes the results of the others, as sketched below.
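The sketch below illustrates one simple way bundling could work in practice: two sub-questions are sent as separate prompts, and a final prompt synthesizes the answers. For simplicity it uses a single provider via the OpenAI Python client, though in practice each call could go to a different model; the model name, questions, and helper function are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:  # placeholder model name
    """Send a single prompt and return the model's text reply."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Each research sub-question gets its own prompt.
programs = ask("Describe five well-regarded museum-based youth arts programs and what distinguishes them.")
funders = ask("List five foundations that have recently supported youth arts programming in museums.")

# A final prompt synthesizes the earlier results into one brief.
brief = ask(
    "Synthesize the following research notes into a one-page brief for a museum "
    "considering a new youth arts program:\n\n"
    f"PROGRAMS:\n{programs}\n\nFUNDERS:\n{funders}"
)

print(brief)
```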
Training. Offer training to staff on AI use and its limitations so they understand how a question should be developed (“prompt engineering”), what response to anticipate, and how to evaluate its appropriateness and accuracy. A local consultant might offer training. You might share examples from various departments of their success in employing AI.
Use Policy. Establish a policy for your nonprofit that sets the rules on how AI can (and shouldn’t) be used, and publish and communicate that policy in training and regular communication. Of course, it’s likely that your employees are already using AI in various ways, so acknowledging and empowering them and sharing these new practices can be more effective than scolding or limiting.
Discernment. A knowledgeable human is essential to the formulation of appropriate prompts, and a discerning human needs to carefully review the ‘product’ to confirm its veracity and affirm its value. For simple searches, a traditional search engine might still provide purer, more reliable information.
Dr. Lauri Goldkind is Professor of Social Work at the Graduate School of Social Service at Fordham University. She is Editor-in-Chief of the Journal of Technology in Human Services. Dr. Goldkind thinks and writes about improving the lives of those in human services through technology. She can be reached at goldkind@fordham.edu and found online at www.laurigoldkind.net.