AI governance policies fail not because they are badly written but because nobody follows them. Only 25% of organisations have fully put AI governance programmes into practice. 43% have a written policy, 25% are still drafting one, and 29% have nothing at all. The mid-market gets hit hardest: large enough to face real regulatory exposure under UK GDPR and the ICO's AI guidance, but too lean to hire a dedicated AI ethics team. The organisations getting this right are not the ones with the thickest policy documents. They are the ones that built governance into the tools people already use, trained staff in language they understood, and made compliance easier than non-compliance. That is not a philosophical position. It is a design principle.
Why Written Policies Gather Dust
[Chart: Board Oversight of AI Risk Has Tripled. Fortune 100 companies citing AI risk in board oversight responsibilities, showing a rapid increase from 2024 to 2025. Source: EY Center for Board Matters / Corporate Compliance Insights 2025]
A compliance team writes a 40-page AI acceptable use policy. Legal reviews it. The board signs it off. HR emails it to all staff with a read-and-acknowledge link. Six months later, 60% of employees are using ChatGPT for client work without checking whether the policy allows it. This is not hypothetical. Research shows 97% of AI-related breach victims lacked proper access controls — the policy existed, the enforcement did not. The problem is structural. Written policies assume people will read them, remember them, and apply them in the moment they are about to paste client data into a prompt. That assumption is wrong for the same reason 'do not reply all' policies have never stopped anyone replying all.
The Numbers That Should Worry Mid-Market IT Directors
[Chart: The AI Governance Gap in UK Organisations. Percentage of organisations at each stage of AI governance maturity, showing the sharp drop between having a policy and enforcing it. Source: AI Data Analytics Network / Knostic AI Governance Statistics 2025]
The gap between policy and practice is measurable. 43% of businesses have an AI governance policy, but only 14% enforce it at the enterprise level. Board oversight of AI has tripled since 2024 — from 16% to 48% of Fortune 100 companies citing AI risk in board responsibilities — but that energy has not reached the mid-market in equal measure. 98% of UK respondents in a global AI risk survey reported financial losses from unmanaged AI risks, averaging US$3.9 million per organisation. Firms with formal AI oversight structures report 35% more revenue growth and 40% better cost control than those without. The business case is not abstract. Governance correlates with performance because it forces clarity about what AI is being used for, by whom, and with what controls.
What the ICO and DSIT Actually Expect
The UK does not have a single AI law. The government's approach, set out by DSIT, is principles-based and sector-led. Five principles apply across all sectors: safety, transparency, fairness, accountability, and contestability. Existing regulators — the ICO for data, the FCA for finance, the CMA for competition — enforce AI rules within their own remits. The ICO's AI and data protection toolkit is the single best resource for mid-market firms. It provides a risk assessment tool for auditing AI compliance, DPIA guidance for AI under UK GDPR, and specific advice on automated decision-making under Article 22. The Data Use and Access Act, which received Royal Assent in June 2025, is prompting the ICO to review and update its AI guidance. The AI Opportunities Action Plan, published in January 2025, set out 50 commitments; the government has since reported 38 as met and is establishing an AI Growth Lab for regulatory sandbox work. None of this requires you to wait. The principles are clear. The enforcement mechanisms exist. What the regulators want to see is documented, proportionate effort — not perfection.
Building Governance Into the Tools People Already Use
The mid-market firms getting governance right are not distributing PDFs. They are configuring the platforms staff already work in. Microsoft Purview now provides Data Security Posture Management for AI, extending sensitivity labels and data loss prevention policies to Copilot and third-party AI tools. You can set policies that prevent Copilot from processing documents labelled Confidential, block sensitive data types in AI prompts, and audit every AI interaction against your classification scheme. For organisations not on Microsoft 365, OneTrust offers a dedicated AI governance platform that automates model and use-case risk assessments aligned with NIST AI RMF and ISO 42001. It inventories every AI tool in use, maps data flows, and flags risk before it becomes a breach. OneTrust was recognised in the 2025 Gartner Market Report for AI Governance Platforms, though its pricing can be steep for the lower end of mid-market.
The Staff Training Problem Nobody Talks About
88% of organisations rely on on-the-job training for AI rather than structured programmes. That means your staff are learning AI governance from whoever sits next to them, which might be the person who pastes client names into ChatGPT every Thursday. Effective training for mid-market firms does three things. First, it teaches in context — showing staff what the policy means for their specific role, not reciting principles at them. A finance analyst needs to know they cannot paste management accounts into a public AI tool. A recruiter needs to know that using AI to screen CVs triggers Article 22 obligations. Second, it repeats — quarterly refreshers tied to real incidents work better than annual compliance tick-boxes. Third, it measures — you track completion, test comprehension, and follow up on teams that score poorly. The apprenticeship route for AI skills has grown from 3% of AI hires in 2020 to 19% in 2025, showing that structured pathways produce better outcomes than hoping people will figure it out.
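The measurement step is the one most firms skip, yet it is trivial to automate. A minimal sketch, assuming hypothetical teams, scores, and policy thresholds:

```python
# Illustrative training-compliance check (all data and thresholds assumed):
# flag teams whose completion rate or average comprehension score falls
# below what the policy requires.
RECORDS = [
    # (team, staff trained, headcount, mean quiz score out of 100)
    ("Finance", 18, 20, 82),
    ("Recruitment", 9, 15, 71),
    ("Client Services", 30, 31, 58),
]

COMPLETION_THRESHOLD = 0.9   # assumed policy target
SCORE_THRESHOLD = 70         # assumed pass mark

def flag_teams(records):
    """Return (team, reason) pairs for teams needing follow-up."""
    flagged = []
    for team, trained, headcount, score in records:
        completion = trained / headcount
        if completion < COMPLETION_THRESHOLD:
            flagged.append((team, f"completion {completion:.0%}"))
        elif score < SCORE_THRESHOLD:
            flagged.append((team, f"mean score {score}"))
    return flagged

for team, reason in flag_teams(RECORDS):
    print(f"{team}: follow up ({reason})")
```

Note that a team can pass on completion and still fail on comprehension; tracking only the tick-box metric hides exactly the teams that need the quarterly refresher most.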
A Framework Comparison for Mid-Market Firms
Three frameworks dominate the AI governance conversation. ISO 42001 is the international standard for AI management systems. It is rigorous, certifiable, and expensive to adopt — best suited to firms where certification is a commercial requirement, such as those selling AI products or bidding for public sector contracts. The NIST AI Risk Management Framework is free, flexible, and widely referenced. It maps governance across four functions — Govern, Map, Measure, Manage — and works well as a backbone for mid-market policy. It does not require certification. The ICO AI and Data Protection Toolkit is UK-specific, free, and directly aligned with UK GDPR enforcement priorities. For a mid-market UK firm, the pragmatic approach is to use the ICO toolkit as your compliance baseline, adopt NIST AI RMF as your operational framework, and pursue ISO 42001 only if clients or contracts demand certification.
The 90-Day Governance Rollout
Week one to two: audit every AI tool currently in use across the organisation. This means shadow AI as well — the tools staff adopted without IT approval. Survey department heads, check expense claims for AI subscriptions, and review browser extensions.
Week three to four: draft your AI acceptable use policy. Keep it under ten pages. Define what AI tools are approved, what data can and cannot be processed, who is accountable for each use case, and what happens when someone breaks the rules.
Week five to six: configure your platform controls. In Microsoft 365, this means Purview sensitivity labels and DLP policies for Copilot. For other platforms, it means access controls and audit logging.
Week seven to eight: deliver role-specific training. Not a company-wide webinar — department-level sessions that show each team exactly what the policy means for their daily work.
Week nine to ten: send the first round of compliance reviews to department heads. Ask them to verify which AI tools their teams use and confirm that approved tools are configured correctly.
Week eleven to twelve: document your governance framework formally, complete your UK GDPR DPIA for AI processing, and present the board summary. This is a three-month project for a compliance lead and an IT director working together. It does not require external consultants unless you want ISO 42001 certification.
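The output of the week-one audit can be as simple as a structured register. An illustrative Python sketch — the tool names, fields, and findings below are hypothetical, not a recommended approved list:

```python
from dataclasses import dataclass

# Illustrative AI tool register for the week-one audit. The aim is one list
# that separates approved tools from shadow AI discovered via surveys,
# expense claims, and browser-extension reviews.
@dataclass
class AITool:
    name: str
    owner: str           # accountable department head
    approved: bool
    data_allowed: str    # highest data classification permitted
    found_via: str       # survey, expense claim, or browser audit

INVENTORY = [
    AITool("Microsoft Copilot", "IT", True, "Confidential", "survey"),
    AITool("ChatGPT (free tier)", "Marketing", False, "Public", "expense claim"),
    AITool("Otter.ai", "Sales", False, "Public", "browser audit"),
]

shadow = [t for t in INVENTORY if not t.approved]
for tool in shadow:
    print(f"Shadow AI: {tool.name} ({tool.owner}, found via {tool.found_via})")
```

The same register then feeds weeks three to four (the policy's approved-tools list) and weeks nine to ten (the department-head compliance review), so the audit is done once and reused.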
What Good Looks Like After Six Months
Six months after rollout, a well-governed mid-market firm looks like this. Every AI tool in use is registered in a central inventory. Sensitivity labels prevent confidential data from reaching AI tools without authorisation. Staff have completed role-specific training and can describe what they are and are not allowed to do. Department heads run quarterly reviews of AI usage in their teams. The board receives a one-page AI risk summary each quarter. Incidents — and there will be incidents — are logged, investigated, and used to update the policy. The DPIA is reviewed annually or when AI usage changes materially. This is not bureaucracy for its own sake. Firms with formal AI oversight report 35% more revenue growth than those without. Governance is the thing that lets you adopt AI faster, because it removes the ambiguity that makes cautious managers say no to everything.
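The quarterly board summary should fall out of data the inventory and incident log already hold, not require a fresh exercise. A minimal sketch with hypothetical figures:

```python
# Illustrative quarterly figures (assumed values, drawn from the tool
# inventory, incident log, and training records a governed firm keeps).
quarter = {
    "tools_registered": 14,
    "shadow_tools_found": 2,
    "incidents_logged": 3,
    "incidents_closed": 3,
    "training_completion": 0.94,
}

def board_summary(q: dict) -> str:
    """Render the one-page board summary as three plain lines."""
    lines = [
        f"AI tools registered: {q['tools_registered']} "
        f"({q['shadow_tools_found']} shadow tools found and reviewed)",
        f"Incidents: {q['incidents_logged']} logged, "
        f"{q['incidents_closed']} closed",
        f"Training completion: {q['training_completion']:.0%}",
    ]
    return "\n".join(lines)

print(board_summary(quarter))
```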
What Governance Cannot Fix
[Chart: Where Mid-Market Governance Effort Should Go. The 70/20/10 split between tooling, training, and written policy that characterises effective AI governance programmes. Source: CTC analysis based on breach and adoption data 2025]
No policy stops a determined employee from copying AI output into a personal email. No sensitivity label prevents someone with legitimate access from sharing a summary they should not have shared. Governance reduces risk, it does not eliminate it. The 97% of AI breach victims who lacked access controls did not have a governance failure — they had an enforcement failure. The distinction matters because it tells you where to invest. Spend 70% of your effort on tooling and controls that make the wrong thing hard to do. Spend 20% on training that makes the right thing easy to understand. Spend 10% on the written policy that documents what you have already built. That ratio — 70/20/10 — is the inverse of what nearly every organisation does, which is why their governance programmes fail.

