
AI Acceptable Use Policy Template

A comprehensive policy template for governing employee use of AI tools. Customize this template to align with your organization's risk tolerance, industry, and regulatory requirements.

1. Purpose

This policy establishes guidelines and requirements for the acceptable use of Artificial Intelligence (AI) tools, platforms, and services by employees within [Organization Name]. The purpose of this policy is to:

  • Protect organizational data, including Controlled Unclassified Information (CUI), Personally Identifiable Information (PII), Protected Health Information (PHI), and other sensitive or proprietary information.

  • Ensure compliance with applicable laws, regulations, and contractual obligations related to data privacy and security.

  • Promote responsible and ethical use of AI technologies in business operations.

  • Establish clear boundaries for what constitutes acceptable and prohibited use of AI tools.

  • Reduce the risk of data leakage, intellectual property exposure, and reputational harm.

2. Scope

This policy applies to:

  • All employees, contractors, temporary workers, interns, and third-party personnel who access organizational systems or handle organizational data.

  • All AI tools, platforms, services, and integrations used for business purposes, whether provided by the organization or accessed independently.

  • All devices used to access AI tools, including organization-owned devices, personal devices used for work (BYOD), and mobile devices.

  • All data processed through AI tools, regardless of the data classification level.

3. Definitions

Artificial Intelligence (AI)

Any software system that uses machine learning, natural language processing, computer vision, or other techniques to generate content, analyze data, make predictions, or automate decisions.

Generative AI

AI systems that create new content such as text, images, code, audio, or video based on user prompts. Examples include ChatGPT, Microsoft Copilot, Claude, Gemini, and Midjourney.

Approved AI Platform

An AI tool or service that has been reviewed, risk-assessed, and formally approved by the organization for business use.

Shadow AI

The use of unapproved or unauthorized AI tools for business purposes without organizational knowledge or approval.

Prompt

The input text, data, or instructions provided to an AI system to generate a response or output.

4. Approved AI Platforms

The following AI platforms have been approved for business use. Employees must use only approved platforms for work-related activities.

Platform | Approved Use Cases | Data Classification Allowed
[Platform 1, e.g., Microsoft Copilot] | [e.g., Document drafting, email summarization, data analysis] | [e.g., Internal, Confidential with restrictions]
[Platform 2, e.g., ChatGPT Enterprise] | [e.g., Code assistance, research, content generation] | [e.g., Internal only]
[Platform 3, e.g., Organization-hosted LLM] | [e.g., All business functions] | [e.g., All classifications up to CUI]

Requests to add new AI platforms to the approved list must be submitted to [IT Security Team / Governance Committee] for review. The review process includes a vendor risk assessment, data processing agreement review, and security architecture evaluation.

5. Prohibited Uses

The following uses of AI tools are strictly prohibited:

  • Entering CUI, PII, PHI, financial account numbers, Social Security numbers, or other regulated data into any AI platform unless the platform has been specifically approved for that data classification.

  • Entering trade secrets, proprietary source code, patent-pending information, or confidential business strategies into external AI platforms.

  • Using AI tools to generate content that is discriminatory, harassing, defamatory, or otherwise violates organizational policies or applicable laws.

  • Using AI-generated content in any deliverable, decision, communication, or regulatory submission without human review and validation.

  • Presenting AI-generated work as original human-authored work in contexts where disclosure is required.

  • Using AI tools to circumvent security controls, access unauthorized data, or bypass organizational policies.

  • Installing or accessing unapproved AI browser extensions, plugins, or integrations on organization-managed devices.

  • Using AI tools to make automated decisions about hiring, termination, compensation, or other employment actions without human oversight.

  • Using AI tools to create deepfakes, synthetic media, or misleading content that could harm individuals or the organization.

6. Data Handling Requirements

When using approved AI tools, employees must follow these data handling requirements:

  • Review all prompts before submission to ensure they do not contain sensitive, regulated, or proprietary data beyond what the platform is approved to process.

  • Strip or anonymize sensitive data elements before including them in AI prompts whenever possible.

  • Do not upload files, documents, or datasets containing sensitive data to AI platforms unless the platform has been approved for that data classification and use case.

  • Review all AI-generated outputs for accuracy, bias, and appropriateness before using them in business operations.

  • Do not rely on AI-generated outputs as the sole source of truth for compliance, legal, financial, or safety-critical decisions.

  • Retain records of significant AI interactions that produce business decisions or deliverables in accordance with the organization's records retention policy.

  • Report any suspected data leakage through AI tools to the IT Security team immediately.
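The stripping and anonymizing step above can be partially automated. The following is a minimal, illustrative sketch of a pre-submission redaction pass; the function name and the regex patterns are assumptions for this example and cover only a few obvious identifiers. A real deployment would use a reviewed data loss prevention (DLP) tool and much broader pattern coverage (names, addresses, account numbers).

```python
import re

# Illustrative patterns only -- not a complete PII inventory.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace common PII patterns with labeled placeholders
    before the prompt is sent to an approved AI platform."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact_prompt(
    "Contact Jane at jane.doe@example.com or 555-123-4567; SSN 123-45-6789."
))
```

Automated redaction supplements, but does not replace, the manual prompt review this section requires.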

7. Employee Responsibilities

All employees are responsible for:

  • Completing required AI awareness training before using any AI tools for business purposes.

  • Understanding which AI platforms are approved and the data classification restrictions for each.

  • Exercising professional judgment when reviewing AI-generated content and not blindly accepting outputs.

  • Reporting any suspected policy violations, data incidents, or security concerns related to AI use.

  • Complying with all applicable laws, regulations, and organizational policies when using AI tools.

  • Refraining from using personal AI accounts or subscriptions for business activities.

  • Keeping current with policy updates and attending refresher training when required.

8. Incident Reporting

Employees must report the following AI-related incidents immediately:

  • Accidental submission of sensitive, regulated, or proprietary data to an unapproved AI platform.

  • Discovery of AI-generated content that contains biased, discriminatory, or inaccurate information used in business decisions.

  • Identification of colleagues using unapproved AI tools (Shadow AI) for business activities.

  • Receipt of AI-generated phishing emails, deepfake communications, or other AI-enabled social engineering attempts.

  • Any situation where AI-generated output has been used in a regulatory submission, legal document, or client deliverable without proper review.

Report incidents to [IT Security Team / Security Operations Center] via [reporting channel]. Do not attempt to remediate incidents independently.

9. Review and Updates

This policy will be reviewed and updated according to the following schedule:

  • Quarterly review by the [IT Security Team / Governance Committee] to assess policy effectiveness and update the approved platform list.

  • Annual comprehensive review including updates to reflect new regulations, emerging AI capabilities, and lessons learned.

  • Ad-hoc updates triggered by significant AI-related incidents, new regulatory requirements, or major changes in approved AI platforms.

  • All policy updates will be communicated to employees and tracked in the policy version history.

Policy Owner: [Title / Department]

Last Reviewed: [Date]

Next Review: [Date]

Version: [X.X]
