“Don’t Panic”: Hitchhiker’s Guide to AI Policy
If you’ve read the Douglas Adams classic sci-fi comedy The Hitchhiker’s Guide to the Galaxy, you probably remember that the guide itself had the useful reminder “Don’t Panic” on the cover. You can take that advice to heart when formulating an artificial intelligence (AI) policy for your community benefit organization.
Don’t get me wrong—the need for an AI policy is urgent. Multiple studies find that 50% or more of employees are using unauthorized AI tools in their companies, with some suggesting the number is as high as 93%. And more than a third of those employees are sharing sensitive data with AI applications.
Some training may be warranted, to put it mildly.
The key question for leadership should be, “How do we clamp down on security and reputational risks without stifling innovation or making employees angry?” The answer: Take this opportunity to start a conversation about AI. Take these simple steps:
Survey the staff to see how they feel about AI and what tools they use now. The survey introduction should emphasize transparency and cooperation. (See suggested survey language below.)
Take the survey results and incorporate them into a “draft” policy. The drafting team should include human resources (HR), information technology (IT), at least one person from leadership, and frontline workers, ideally including at least one AI believer and one AI skeptic.
Play back the survey results to the organization in a report and solicit comments on the draft policy. The report should point out where you incorporated results into the policy; where you purposely did not incorporate employee feedback, explain why. Staff need to understand that this is a dialogue, but that leadership remains responsible for final decisions.
When you have gathered comments and incorporated changes where appropriate, train the staff on the policy and on basic uses of AI.
Your AI policy does not have to be complicated, but it should reflect your organization’s mission, vision and values where appropriate. Make sure you incorporate the key elements of a good policy that I have already covered. Contact me if you need some specific examples of other organizations’ AI policies.
If you complete this AI policy development process the right way, you may find that employees feel more like management is on their side in this technological revolution. Best practice is to see AI policy as an opportunity for discussion and training.
After all, we’re all new at this!
I recommend starting with NTEN’s sample AI policy, which is too long in my opinion but a great start. If you’re a member of the Society for Human Resource Management (SHRM), they have a sample policy behind their paywall.
Here’s some more sample language for a basic AI policy (with my caveat that you should also incorporate policies related to your core mission, vision and values):
Trust and Transparency: We disclose when we have used AI in meaningful ways. We do not use AI for decisions that could affect service recipients or employees. We review all uses of AI for accuracy. We do not store sensitive data in AI tools unless we have reviewed the vendor’s safety and security provisions and found them sufficiently strict.
Privacy and Data Security: We do not enter personally identifiable information into AI tools. All AI data storage adheres to our data retention policies and data sharing policies. We have added additional security measures specifically to combat AI phishing and fraud. Employees must not share with AI tools any data that is confidential, proprietary, or protected by regulation without prior approval. Employees must apply the same security best practices we use for all company and customer data.
Responsibility and Accountability: The Director of IT is responsible for vetting all AI tools for safety, security and reliability. We do not use tools that have not been vetted. Each individual in the organization is responsible for disclosing when they have used AI in a meaningful way to create content, with the exception of minor editing such as spelling and grammar corrections. The COO and Director of HR are responsible for the ethical use of AI, including monitoring changes in AI tools that might affect these policies.
Bias and Fairness: Understanding that AI training data often contains implicit bias, we examine the output of AI tools to screen for bias. For this reason, we do not use AI tools to make decisions where the data could identify characteristics such as race, sex or country of origin.
Vendor Selection and AI Development: IT is responsible for ensuring that AI tools do not cause physical or digital harm. This includes ongoing monitoring of our AI vendors for bias, safety and ethics.
Adaptability and Continuous Learning: Understanding that AI evolves rapidly, we commit to ongoing training and to adapting our policies and frameworks as new ethical challenges and technological advancements emerge. We will re-evaluate this AI policy every six months. IT will evaluate new AI tools for approval every two months.
Legal and Regulatory Compliance: We comply with laws and regulations related to AI. In keeping with our organization’s values, we apply the highest ethical standards possible to AI use. These standards include avoiding AI-created original images and labeling AI-created content whenever possible.
Here’s some language for an AI employee survey:
Introduction: With the increasing use of AI across all kinds of software, we’re working on developing an organizational AI policy. Before we start, we want to get your input. This survey will remain confidential. If you are already using unauthorized AI tools, this is your moment to come clean! We promise no retaliation or negative consequences.
Questions:
How familiar are you with artificial intelligence tools? (1=No knowledge, 7=Completely knowledgeable)
What specific AI tools do you use at work today (e.g., ChatGPT, Claude, Copilot)? Include AI tools that are already embedded in software such as Canva. (Open ended)
What kinds of tasks, if any, do you accomplish with AI tools? (Open ended)
What concerns do you have about using AI in our organization? (Open ended)
If you are excited or optimistic about AI, what do you anticipate will be the benefits? (Open ended)