Two thirds of UK IT decision-makers have either banned or considered banning ChatGPT on work devices. The NCSC has published detailed guidance on LLMs and sensitive data. Yet nearly every boardroom conversation about AI and data security bears almost no resemblance to what the NCSC actually wrote. The result is a split: companies that banned everything and lost productivity, and companies that banned nothing and lost control of their data.
The Six Myths — and What the NCSC Actually Said
The NCSC's position on large language models is publicly available. It is not buried in a classified annex. It is a blog post, a set of guidelines for secure AI system development, and supporting material on their website. The problem is not that the guidance is hidden. The problem is that almost nobody reads it before making policy.
Myth One — ChatGPT Trains on Everything You Type
This was true of the free consumer product at launch. It is not true of ChatGPT Enterprise, ChatGPT Business (formerly ChatGPT Team), or the OpenAI API. OpenAI's published policy states that business-tier products are opted out of model training by default: no prompts, outputs, or data accessed through the platform are used to improve models unless the organisation explicitly opts in.
The NCSC's actual warning was narrower and more precise: queries submitted to public LLM services are visible to the service provider and may be used to develop the service. The provider could also be acquired by an organisation with a different privacy stance, or suffer a data breach. That is a real risk. But it applies to the free consumer tier, not to the enterprise products that now exist.
The practical gap: a blanket ban treats a £20 per user per month ChatGPT Business subscription and a free personal ChatGPT account as the same thing. They are not.
Myth Two — Microsoft Copilot Is Automatically Safe Because It Is Microsoft
Microsoft 365 Copilot does keep your data inside your tenant boundary. Prompts, grounding data from Microsoft Graph, and responses are processed within the Microsoft 365 service boundary, logically isolated per tenant. Microsoft does not use this data to train foundation models.
But "inside your tenant" is not the same as "safe." If your SharePoint permissions are a mess — and in the average SMB, they are — then Copilot will surface documents that individual users were never meant to see. It does not add security. It exposes the security you already have, or do not have.
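The point is easiest to see in miniature. The sketch below is purely illustrative: it models how a Copilot-style assistant grounded on existing permissions will surface anything the user's ACLs already allow, including a misconfigured file. The documents, users, and group names are hypothetical, not real Microsoft Graph objects.

```python
# Illustrative sketch only: an assistant grounded on existing ACLs can
# quote anything those ACLs permit. All names here are hypothetical.

documents = {
    "Q3-board-pack.docx": {"allowed": {"directors"}},
    "salaries-2025.xlsx": {"allowed": {"hr", "everyone"}},  # misconfigured ACL
    "staff-handbook.pdf": {"allowed": {"everyone"}},
}

user_groups = {"jo": {"everyone"}, "sam": {"hr", "everyone"}}

def surfaced_for(user: str) -> list[str]:
    """Return the documents an assistant grounded on these ACLs could quote."""
    groups = user_groups[user]
    return sorted(name for name, doc in documents.items()
                  if doc["allowed"] & groups)

print(surfaced_for("jo"))  # the salaries file leaks: 'everyone' was granted
```

The assistant added no new access here; the over-broad `everyone` grant did. That is the sense in which Copilot exposes the security you already have.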
There is a second wrinkle. Microsoft added Anthropic's Claude models as an option within Copilot in 2025. Those Claude models are explicitly excluded from the EU Data Boundary and UK in-country processing commitments. If your organisation is in the UK, Anthropic models are disabled by default, but if someone toggled them on, your data processing geography just changed without a policy update.
Myth Three — Banning AI Tools Keeps Your Data Safe
66 per cent of UK IT decision-makers reported banning or considering banning ChatGPT on employee devices. 70 per cent cited data protection as the reason.
The NCSC did not recommend blanket bans. What the NCSC said was that organisations should ensure staff who want to experiment with LLMs are able to do so in a way that does not put organisational data at risk. That is not a ban. That is a controlled environment with clear rules about what data goes in.
A ban without an alternative just pushes usage underground. Staff use personal devices, personal accounts, and the free consumer tier — the one that actually does use data for training by default. The ban achieves the opposite of what it intended.
Myth Four — Data Residency Solves the GDPR Problem
OpenAI launched UK data residency for ChatGPT Enterprise, ChatGPT Edu, and the API platform in October 2025. Europe got data residency in February 2025. Microsoft 365 Copilot stores data at rest in your tenant's region — UK South and UK West for UK-provisioned tenants.
Data residency is a necessary condition for UK GDPR compliance. It is not a sufficient one. You still need a lawful basis for processing, a data processing agreement, a record of processing activities, and — if you are feeding personal data into prompts — a legitimate interest assessment or explicit consent. Residency tells you where the bytes sit. It does not tell you whether you have the right to put them there.
The NCSC's guidance on sensitive data is blunt: classify your data first, then decide what can go into which tool. Most organisations do it the other way round.
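"Classify first, then decide" can be made concrete as a simple policy table. This is a minimal sketch under assumed labels: the classification names and tool identifiers below are illustrative, not NCSC terminology, and a real policy would live in governance tooling rather than code.

```python
# Hypothetical policy table: which tools may process which classification.
# Labels and tool names are assumptions for illustration only.

ALLOWED_TOOLS = {
    "public":        {"chatgpt_free", "chatgpt_business", "copilot"},
    "internal":      {"chatgpt_business", "copilot"},
    "confidential":  {"copilot"},  # stays inside the tenant boundary
    "personal_data": set(),        # blocked until a lawful basis is documented
}

def tool_permitted(classification: str, tool: str) -> bool:
    """Check a proposed (classification, tool) pairing against the policy."""
    return tool in ALLOWED_TOOLS.get(classification, set())

print(tool_permitted("internal", "copilot"))           # True
print(tool_permitted("personal_data", "chatgpt_business"))  # False
```

The useful property is the default: a classification that is missing from the table, or deliberately mapped to an empty set, is denied everywhere until someone makes a decision.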
Myth Five — Prompt Injection Is a Fixable Bug
In December 2025, the NCSC published a blog post titled "Prompt injection is not SQL injection" that made one point very clearly: prompt injection may never be fully mitigated. SQL injection became manageable because developers could draw a firm line between commands and untrusted data. With LLMs, that line does not exist inside the model. Every token is treated as a potential instruction.
The NCSC's recommendation was not to wait for a fix. It was to design systems that assume the model is, in their word, "inherently confusable" and to manage the risk through architecture — limiting what the model can access, restricting what actions it can trigger, and building audit trails.
For an SMB running Copilot, this means that any workflow where an LLM reads untrusted content (incoming emails, uploaded documents, web pages) and then takes an action (drafting a reply, moving a file, updating a record) has a prompt injection surface. The answer is not to stop using the tool. The answer is to keep a human in the loop for anything that matters.
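One way to picture that human-in-the-loop control is as a gate between what the model proposes and what actually runs. The sketch below is not a production control, and the action names and approval callback are illustrative assumptions: low-stakes drafting passes through, while anything with side effects waits for explicit human approval.

```python
# Minimal sketch of a human-in-the-loop gate for LLM-proposed actions.
# Action names and the approve() callback are illustrative assumptions.

AUTO_SAFE = {"draft_reply"}  # drafts are reviewable before anything is sent

def handle_llm_action(action: str, payload: dict, approve) -> str:
    """Auto-allow reviewable drafts; require human approval for side effects."""
    if action in AUTO_SAFE:
        return f"drafted: {payload['summary']}"
    if approve(action, payload):  # the human in the loop
        return f"executed: {action}"
    return f"blocked: {action}"

# Hidden text in an incoming email might make the model *propose*
# "forward_email" to an attacker; the gate stops silent execution.
print(handle_llm_action("forward_email",
                        {"summary": "forward thread externally"},
                        approve=lambda a, p: False))  # blocked: forward_email
```

The injection still happens inside the model; the gate simply ensures that a confused model can only suggest, never silently act.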
Myth Six — The NCSC Said AI Is Too Risky for Small Businesses
The NCSC runs an entire small business guide that includes advice on using AI tools safely. Their position is not that AI is too risky. Their position is that the risk is manageable if you do the groundwork: classify your data, set clear policies, train your staff, and choose business-tier products with contractual data protections.
The NCSC also published guidelines for secure AI system development, co-signed by agencies from 18 countries. Those guidelines are aimed at organisations building AI, not just using it. But the core principle applies to everyone: understand what your data is doing at every stage.
How the Products Actually Compare on Data Handling
| Feature | ChatGPT Free/Plus | ChatGPT Business | ChatGPT Enterprise | Microsoft 365 Copilot |
| --- | --- | --- | --- | --- |
| Data used for training | Yes (opt-out available) | No (by default) | No (by default) | No |
| UK data residency | No | No | Yes (from Oct 2025) | Yes (UK South/West) |
| Data stays in tenant | No | No | Configurable | Yes |
| SOC 2 certified | No | Yes | Yes | Yes |
| DPA available | No | Yes | Yes | Yes |
| Zero retention option | No | No | Yes (API) | N/A |
| GDPR compliant tier | No | Partial | Yes | Yes |
Eight-Point Checklist for Getting Your AI Data Policy Right
1. Classify your data before you choose a tool — the NCSC says this is the starting point, not the afterthought.
2. Ban the free consumer tier on work devices, not AI itself — give staff a business-tier alternative.
3. Audit your SharePoint and OneDrive permissions before deploying Copilot — it will surface everything your permissions allow.
4. Check whether Anthropic models are enabled in your Copilot tenant — if you need UK-only processing, they should be off.
5. Require a data processing agreement for any AI tool that handles personal data — ChatGPT Business and Enterprise both offer one.
6. Do not feed personal data into prompts without a lawful basis — data residency alone does not satisfy UK GDPR.
7. Keep a human in the loop for any AI workflow that reads untrusted content and then takes an action.
8. Review the NCSC's published guidance yourself — it is free, it is public, and it is shorter than the commentary written about it.


