How to Build a Shadow AI Policy That People Will Actually Follow

AI Governance | Shadow AI | Policy

Shadow AI is not a future problem. It is a present-day operating condition for most organizations. Employees use personal ChatGPT accounts, AI writing assistants, browser extensions with AI features, and productivity tools with AI capabilities built in -- often without any formal approval, review, or awareness from leadership.

The response to this is typically a policy. The problem is that most shadow AI policies are written to satisfy a compliance checkbox rather than to change behavior. A policy that employees do not read, cannot follow in practice, or do not believe reflects real expectations is not a control. It is documentation of an intention no one acted on.

This guide covers what a practical shadow AI policy should include and how to make it work.

Start with Inventory, Not Rules

The most common mistake in shadow AI policy development is writing rules before understanding what is actually in use. A policy that prohibits tools employees have been using for six months -- and rely on for daily work -- will either be ignored or will generate immediate pushback that undermines the entire effort.

Before drafting any policy language, conduct an AI tool inventory. Ask teams what tools they use. Review browser extensions, installed software, and SaaS integrations. Check which productivity tools have AI features that may have been enabled by default. The inventory will show you what you are actually governing, which is usually different from what leadership assumed.

Define the Data Categories That Matter

The core question in any shadow AI policy is not which tools are allowed. It is which data can flow through which tools. Two employees using ChatGPT are in very different situations depending on what they are sharing with it.

A practical policy defines data categories and establishes clear rules for each:

  • Public or general business information -- information that could appear on your website, in marketing materials, or in public filings. Lower risk for most AI tool use.
  • Internal operational information -- internal processes, meeting notes, project details not intended for external sharing. Review required before using AI tools.
  • Confidential business information -- financial data, contracts, strategic plans, M&A activity, competitive information. Should not be entered into unapproved AI tools.
  • Client or customer data -- any information about clients, their systems, or their business. Generally prohibited from use in AI tools without explicit client consent and a reviewed data processing agreement.
  • Regulated data -- health information, financial records subject to regulation, government contract data. Subject to specific legal requirements that AI tool use may violate.

Most employees do not think through data categories before using an AI tool. A policy that makes these categories explicit and maps them to clear rules gives people something they can actually apply in the moment.
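One way to make the categories operational is to encode them as a lookup table an organization can publish alongside the policy. The sketch below (in Python, with illustrative category names and rule wording drawn from the list above -- not a standard) shows the idea; note the safe default for anything not yet classified:

```python
# Data-category handling rules, mirroring the policy list above.
# Category keys and rule strings are illustrative, not prescriptive.
DATA_CATEGORY_RULES = {
    "public":       "Lower risk for most AI tool use.",
    "internal":     "Review required before using AI tools.",
    "confidential": "Do not enter into unapproved AI tools.",
    "client":       "Prohibited without client consent and a reviewed DPA.",
    "regulated":    "Check specific legal requirements before any AI use.",
}

def rule_for(category: str) -> str:
    """Return the handling rule for a data category.

    Unknown or unclassified data defaults to the most
    restrictive guidance rather than silently passing.
    """
    return DATA_CATEGORY_RULES.get(
        category.lower(),
        "Treat as confidential until classified.",
    )
```

The defensive default matters more than the table itself: an employee pasting something the policy never anticipated should get the conservative answer, not no answer.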

The Approval Process Has to Be Lightweight

If the process for getting an AI tool approved takes two weeks and involves a committee, employees will bypass it. Not because they are careless, but because work has deadlines and informal workarounds feel lower-friction than a bureaucratic process.

A practical approval process for SMBs typically looks like this: a designated owner (often a CISO, COO, or IT lead) reviews new tool requests using a short checklist. The checklist covers data processing terms, vendor security posture, the use case, and which data categories will be involved. For standard productivity tools, decisions should turn around in a few business days.

The policy should make the approval path obvious. Employees who want to use a new tool need to know who to ask and what information to provide.
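The request itself can be a short structured form rather than an email thread. This is one possible shape for the intake record, assuming the checklist items described above; the field names are hypothetical, not from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class ToolRequest:
    """Intake record for an AI tool approval request (illustrative fields)."""
    tool_name: str
    requested_by: str
    use_case: str
    data_categories: list          # e.g. ["public", "internal"]
    vendor_dpa_reviewed: bool = False       # data processing terms checked
    vendor_security_reviewed: bool = False  # vendor security posture checked

def ready_for_decision(req: ToolRequest) -> bool:
    """A request is decision-ready once every checklist item is answered."""
    return (bool(req.use_case)
            and bool(req.data_categories)
            and req.vendor_dpa_reviewed
            and req.vendor_security_reviewed)
```

Whether this lives in a ticketing system, a spreadsheet, or a form is secondary; the point is that the requester knows exactly what information to provide, and the reviewer sees every item before deciding.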

Approved, Restricted, and Prohibited

Rather than a binary allowed/not-allowed structure, most organizations benefit from three categories:

Approved tools have been reviewed and documented and may be used within the data handling guidelines.

Restricted tools may be used only for specific purposes or data categories. An employee might use a restricted AI writing tool for internal drafts but not for processing client information.

Prohibited tools will not be approved for any use case, typically due to vendor terms of service, data residency requirements, or risk profile. Personal accounts on consumer AI platforms often fall here for professional use involving confidential data.

Publishing a maintained list of approved and prohibited tools -- and keeping it current -- is more effective than a general policy statement about responsible use.
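That maintained list can combine the three tiers with the data categories from earlier into a single answerable question: may this tool be used with this data? A minimal sketch, with made-up tool names and tier assignments purely as examples:

```python
# Published tool registry: each entry records its tier and, for approved
# or restricted tools, which data categories are permitted with it.
# Tool names and assignments are illustrative examples only.
TOOL_REGISTRY = {
    "internal-copilot": {"tier": "approved",   "allowed": {"public", "internal"}},
    "ai-writer":        {"tier": "restricted", "allowed": {"public"}},
    "personal-chatbot": {"tier": "prohibited", "allowed": set()},
}

def may_use(tool: str, category: str) -> bool:
    """Answer 'may this tool be used with this data category?'

    Tools not on the list are treated as prohibited until reviewed,
    which is what routes new tools into the approval process.
    """
    entry = TOOL_REGISTRY.get(tool)
    if entry is None or entry["tier"] == "prohibited":
        return False
    return category in entry["allowed"]
```

The deny-by-default behavior for unlisted tools is the operative design choice: it turns "I found a new tool" into an approval request rather than a silent exception.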

What the Policy Needs to Cover

A complete shadow AI policy for an SMB typically includes:

  • Scope: which employees, systems, and use cases the policy covers
  • Data classification summary: what categories of data exist and how they are defined
  • Approved tool list: current as of the policy date, with a process for requesting additions
  • Data handling rules: which categories can be used with which tool types
  • Approval process: who owns it, what the process requires, and expected turnaround
  • Prohibited uses: specific scenarios that are not permitted regardless of tool status
  • Review and output standards: expectations for reviewing AI-generated content before using it
  • Accountability: who is responsible for violations and what the reporting process is

Make It a Living Document

A shadow AI policy written in early 2024 is already outdated. The tool landscape changes quickly. New AI capabilities are added to existing products. Vendors update their data processing terms. New categories of AI risk emerge.

The policy needs an owner, a review cycle (at least quarterly), and a process for issuing updates when significant changes occur. A policy that reflects last year's tool landscape is a compliance artifact, not an operational control.

Getting Started

For organizations that do not have a shadow AI policy in place, the right first step is the inventory. Understand what is actually in use before writing rules. From there, governance is a matter of making existing informal decisions explicit and documenting them.

NightFortress delivers AI governance engagements for SMBs and mid-market organizations in Northern Virginia and the DC metro area, including shadow AI assessment and policy framework development. Contact us or learn more about our AI governance services.