71% of UK employees have used unapproved AI tools at work, and 22% of them used those tools for finance tasks. If your business does not have a written AI acceptable use policy, your team is already making decisions about customer data, intellectual property and regulatory compliance on your behalf, and you have no visibility into what they are putting into ChatGPT, Copilot or Gemini. This guide gives you a practical, plain-English policy you can put in front of your team by Monday morning.
Why Your Business Needs an AI Policy Right Now
[Chart: Shadow AI Usage Among UK Employees (October 2025). What UK employees are using unapproved AI tools for at work, based on Microsoft research. Source: Microsoft UK Shadow AI Report, October 2025]
Microsoft research from October 2025 found that 71% of UK employees use consumer AI tools at work without approval. 49% use them to draft emails. 40% use them for reports and presentations. 28% said their company simply does not provide a sanctioned option, so they use whatever is to hand.
The ICO does not distinguish between a 5,000-person bank and a 12-person accountancy practice. If a member of your team pastes client data into a free ChatGPT account, you are the data controller and you carry the liability. The UK GDPR applies regardless of company size.
The Data (Use and Access) Act 2025 also tightens rules around automated decision-making. If AI output feeds into decisions that affect people — recruitment screening, credit checks, insurance pricing — your business needs documented safeguards. A written policy is the starting point.
What the ICO Expects from Small Businesses
The ICO published its own internal AI use policy in 2025. That document covers approved tools, prohibited uses, data classification and staff responsibilities. You do not need to copy it word for word, but the ICO expects every organisation using AI to have something similar.
The five core principles from the UK Government's AI White Paper apply here too: safety, transparency, fairness, accountability and contestability. Your policy should address each one in terms your team can act on.
The CIPD's guidance for UK employers adds a sixth dimension: upskilling. A policy that says "don't use AI" will be ignored. A policy that says "here are the approved tools, here is how to use them properly, and here is what never goes into a prompt" stands a chance of being followed.
The Eight Sections Every AI Policy Needs
[Chart: Eight Essential Sections of an AI Acceptable Use Policy. Coverage areas that every UK small business AI policy should address. Source: ICO AI Guidance / CIPD Workplace AI Guidance]
After reviewing ICO guidance, CIPD recommendations, and templates from UK legal practices, these are the sections that cover the ground a small business needs:
1. Purpose and scope. State what the policy covers, who it applies to, and which AI tools are in scope. Name them — ChatGPT, Microsoft Copilot, Google Gemini, Claude, Midjourney, whatever your team is likely to encounter. Generic policies get ignored.
2. Approved tools list. Specify which tools are sanctioned for work use. State the subscription tier required. A free ChatGPT account has different data retention rules from a ChatGPT Team or Enterprise plan. A Microsoft 365 Copilot licence processes data within your tenant. These distinctions matter.
3. Prohibited uses. Be specific. No pasting of customer personal data into free-tier tools. No uploading of financial records. No using AI output in legal or regulatory submissions without human review. No generating content that impersonates real people.
4. Data classification rules. Define three categories at minimum: public (press releases, published content), internal (meeting notes, project plans), and restricted (customer PII, financial data, HR records). Map each category to what can and cannot go into an AI prompt.
5. Human review requirements. State where human sign-off is mandatory before AI-generated content goes to a client, a regulator or the public. The Data (Use and Access) Act 2025 requires "meaningful human involvement" in automated decisions that produce legal or material effects.
6. Intellectual property. Who owns AI-generated output? If your team uses Copilot to draft a proposal, does the client know? If you train a model on client data, what are the licensing terms? Address ownership, disclosure and attribution.
7. Training and awareness. Set a minimum standard. Every team member should know where to find the policy, understand the data classification rules, and complete a short briefing before using any AI tool for work. The CIPD recommends regular refresher sessions, not just a one-off induction.
8. Review and update schedule. AI tools change quarterly. Set a review date — every six months at minimum. Assign an owner. The ICO expects policies to reflect current practice, not last year's assumptions.
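Sections 2 and 4 are easier to enforce if the approved-tools list and the classification map also live somewhere machine-checkable, not just in prose. Here is a minimal sketch in Python; the tool names and tiers are illustrative assumptions, so substitute your own approved list.

```python
# Hypothetical mapping from data classification to the AI tools approved
# for it. Tool names and tiers are examples only -- replace with the
# approved list from section 2 of your policy.
APPROVED_TOOLS = {
    "public":     {"chatgpt_free", "chatgpt_team", "m365_copilot"},
    "internal":   {"chatgpt_team", "m365_copilot"},  # no free-tier services
    "restricted": set(),  # restricted data never enters any AI tool
}

def may_use(tool: str, classification: str) -> bool:
    """Return True if `tool` is approved for data of this classification."""
    return tool in APPROVED_TOOLS.get(classification, set())

# A press release can go anywhere; meeting notes cannot go into a free tier.
print(may_use("chatgpt_free", "public"))      # True
print(may_use("chatgpt_free", "internal"))    # False
print(may_use("m365_copilot", "restricted"))  # False
```

Keeping the map in one place means the briefing slides, the intranet page and any internal tooling all answer the "can I paste this?" question the same way.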
Free Template for a Ten-Person UK Business
Here is a starter template you can adapt. It is written for a professional services firm with 10 to 30 staff, but the structure works for any small business. Replace the bracketed sections with your own details.
[Company Name] AI Acceptable Use Policy — Version 1.0 — [Date]
Scope: This policy applies to all employees, contractors and freelancers who use AI tools in connection with [Company Name] business. It covers generative AI tools including ChatGPT, Microsoft Copilot, Google Gemini, Claude and any similar service.
Approved tools: [Company Name] approves the following AI tools for business use: [List tools and subscription tiers, e.g. Microsoft 365 Copilot (included in M365 licence), ChatGPT Team (company account only)]. Any tool not on this list requires written approval from [role/person] before first use.
Prohibited uses: Staff must not enter customer personal data, financial records, HR information, or any data classified as Restricted into any AI tool. Staff must not use AI to generate legal advice, regulatory submissions, or communications to regulators without senior review. Staff must not use AI to create deepfakes, impersonate individuals, or produce misleading content.
Data classification: Public — safe for any AI tool. Internal — approved tools only, no free-tier services. Restricted — never entered into any AI tool under any circumstances.
Human review: All AI-generated content sent to clients, regulators or published externally must be reviewed and approved by a named team member before release.
Intellectual property: AI-generated content created using [Company Name] tools and data is owned by [Company Name]. Staff must disclose to clients when AI has contributed to a deliverable, unless the output is purely internal.
Training: All staff must complete the AI awareness briefing within 30 days of this policy's effective date. Annual refresher training is mandatory.
Review: This policy will be reviewed every six months by [role/person]. The next review date is [date].
Breach: Failure to comply may result in disciplinary action in line with [Company Name]'s existing disciplinary procedure.
The Ten-Point Setup Checklist
[Chart: Monthly Cost per User for Business-Grade AI Tools (UK Pricing). Comparison of monthly per-user costs for approved AI tools with proper data handling. Source: vendor pricing pages, February 2026]
Writing the policy is the first step. Getting your team to follow it requires a plan. Here is a practical checklist drawn from ICO and CIPD guidance:
1. Audit current AI use. Ask your team what they are already using. Microsoft's research shows 71% are using tools you probably do not know about. Start with an honest conversation, not a blame session.
2. Choose your approved tools. Pick two or three. Fewer is better. Make sure each one has a business-grade subscription with proper data handling terms.
3. Set up the subscriptions. ChatGPT Team costs $25 per user per month. Microsoft 365 Copilot costs £24.70 per user per month with an annual commitment. Google Gemini for Workspace starts at around £11.50 per user per month. Budget for the tools your team actually needs.
4. Write the policy. Use the template above or adapt the ICO's own internal policy as a starting point. Keep it under three pages.
5. Get legal sign-off. If you have a solicitor or compliance officer, have them check it against your existing data protection obligations. If you do not, the ICO's AI toolkit provides a free self-assessment.
6. Brief the team. Run a 30-minute session. Cover the three data categories, the approved tools list, and the prohibited uses. Make it practical — show real examples of what a good prompt looks like and what a dangerous one looks like.
7. Pin it somewhere visible. Company intranet, shared drive, Notion page, printed on the wall. If people cannot find the policy, it does not exist.
8. Set up reporting. Create a simple way for staff to flag when they are unsure. An email alias, a Slack channel, a shared form. The goal is questions before mistakes, not incident reports after.
9. Log AI tool usage. If you use Microsoft 365 Copilot, the admin centre tracks usage. For other tools, ask team leads to report monthly. You need visibility, not surveillance.
10. Schedule the first review. Put a date in the diary for six months' time. AI tools change fast. Your policy needs to keep up.
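A lightweight technical backstop for steps 8 to 10 is a small screening check that flags obviously restricted data before it goes near a prompt. The sketch below is illustrative only: the pattern names and regular expressions are assumptions, they are far from exhaustive, and they are no substitute for proper data loss prevention tooling.

```python
import re

# Hypothetical patterns for data that must never enter a prompt
# (UK examples: National Insurance numbers, email addresses, bank
# sort codes). Illustrative, not exhaustive.
RESTRICTED_PATTERNS = {
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
}

def flag_restricted(text: str) -> list[str]:
    """Return the names of any restricted-data patterns found in `text`."""
    return [name for name, pat in RESTRICTED_PATTERNS.items()
            if pat.search(text)]

print(flag_restricted("Client NI is AB123456C"))   # ['ni_number']
print(flag_restricted("Q3 revenue grew by 12%"))   # []
```

Even a crude check like this turns "I wasn't sure" into a concrete flag, which is exactly the questions-before-mistakes behaviour step 8 is trying to create.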
What the Data (Use and Access) Act 2025 Changes
The DUAA came into force on 19 June 2025, with provisions phased through to June 2026. For small businesses, the key changes are:
The Act narrows the restriction on automated decision-making. Solely automated decisions now require safeguards only when they produce legal or similarly material effects on individuals. The rules are strictest when special category data is involved — health records, ethnic origin, political beliefs, biometric data.
For the typical small business, this means: if you use AI to filter job applications, score customer credit, or triage support tickets in a way that affects outcomes for real people, you need documented human oversight. Your policy should specify who reviews those decisions and how.
The ICO plans to publish new guidance on automated decision-making by Spring 2026. Your policy should note this and commit to incorporating the updated guidance when it arrives.
Five Questions to Ask Before You Start
Before you sit down to write, work through these:
What AI tools is my team already using? If you do not know, ask. You will be surprised.
What data does my business handle that should never go into an AI prompt? Client PII, financial records, HR data, legal correspondence. Write the list.
Do our current employment contracts mention AI? If not, you may need to issue an addendum. The CIPD recommends updating contracts alongside the policy.
Who owns AI-generated work product? Your contracts with clients may need updating if AI is contributing to deliverables.
What happens when someone breaks the rules? Define the consequences in advance. A policy without enforcement is a suggestion.
Related Reading
If you are already using Microsoft 365 Copilot and want to understand what the NCSC actually says about putting sensitive data through it, read our earlier piece: What the NCSC Actually Says About Using ChatGPT and Copilot with Sensitive Business Data. And if you are setting up Copilot for the first time, Getting Microsoft 365 Copilot Working for a 20-Person UK Company walks through the practical steps.

