What the NCSC Actually Says About Using ChatGPT and Copilot With Sensitive Business Data

7 min read

The NCSC published clear guidance on large language models and sensitive data, but the majority of UK businesses have either over-reacted with blanket bans or under-reacted by ignoring the risks entirely. This article walks through what the NCSC actually said — including that prompt injection may never be fully fixed, that queries are visible to service providers, and that data classification is the real starting point — and debunks six myths that keep circulating. It compares ChatGPT Enterprise, ChatGPT Business, and Microsoft 365 Copilot on data residency, training opt-outs, and UK data processing, and provides a practical eight-point checklist for getting your AI data policy right.

Photo of Kate Bennett
Written by Kate Bennett CEO of Disruptive LIVE

Two thirds of UK IT decision-makers have either banned or considered banning ChatGPT on work devices. The NCSC has published detailed guidance on LLMs and sensitive data. Yet nearly every boardroom conversation about AI and data security bear almost no resemblance to what the NCSC actually wrote. The result is a split: companies that banned everything and lost productivity, and companies that banned nothing and lost control of their data.

The Six Myths — and What the NCSC Actually Said

The NCSC's position on large language models is publicly available. It is not buried in a classified annex. It is a blog post, a set of guidelines for secure AI system development, and supporting material on their website. The problem is not that the guidance is hidden. The problem is that almost nobody reads it before making policy.

Myth One — ChatGPT Trains on Everything You Type

This was true of the free consumer product at launch. It is not true of ChatGPT Enterprise, ChatGPT Business (formerly ChatGPT Team), or the OpenAI API. OpenAI's published policy states that business-tier products are opted out of model training by default. No prompts, no outputs, no data accessed through the platform is used to improve models unless the organisation explicitly opts in.

The NCSC's actual warning was narrower and more precise: queries submitted to public LLM services are visible to the service provider and may be used to develop the service. The provider could also be acquired by an organisation with a different privacy stance, or suffer a data breach. That is a real risk. But it applies to the free consumer tier, not to the enterprise products that now exist.

The practical gap: a blanket ban treats a £20 per user per month ChatGPT Business subscription and a free personal ChatGPT account as the same thing. They are not.

Myth Two — Microsoft Copilot Is Automatically Safe Because It Is Microsoft

Microsoft 365 Copilot does keep your data inside your tenant boundary. Prompts, grounding data from Microsoft Graph, and responses are processed within the Microsoft 365 service boundary, logically isolated per tenant. Microsoft does not use this data to train foundation models.

But "inside your tenant" is not the same as "safe." If your SharePoint permissions are a mess — and in the average SMB, they are — then Copilot will surface documents that individual users were never meant to see. It does not add security. It exposes the security you already have, or do not have.

There is a second wrinkle. Microsoft added Anthropic's Claude models as an option within Copilot in 2025. Those Claude models are explicitly excluded from the EU Data Boundary and UK in-country processing commitments. If your organisation is in the UK, Anthropic models are disabled by default, but if someone toggled them on, your data processing geography just changed without a policy update.

Myth Three — Banning AI Tools Keeps Your Data Safe

66 per cent of UK IT decision-makers reported banning or considering banning ChatGPT on employee devices. 70 per cent cited data protection as the reason.

The NCSC did not recommend blanket bans. What the NCSC said was that organisations should ensure staff who want to experiment with LLMs are able to do so in a way that does not put organisational data at risk. That is not a ban. That is a controlled environment with clear rules about what data goes in.

A ban without an alternative just pushes usage underground. Staff use personal devices, personal accounts, and the free consumer tier — the one that actually does use data for training by default. The ban achieves the opposite of what it intended.

Myth Four — Data Residency Solves the GDPR Problem

OpenAI launched UK data residency for ChatGPT Enterprise, ChatGPT Edu, and the API platform in October 2025. Europe got data residency in February 2025. Microsoft 365 Copilot stores data at rest in your tenant's region — UK South and UK West for UK-provisioned tenants.

Data residency is a necessary condition for UK GDPR compliance. It is not a sufficient one. You still need a lawful basis for processing, a data processing agreement, a record of processing activities, and — if you are feeding personal data into prompts — a legitimate interest assessment or explicit consent. Residency tells you where the bytes sit. It does not tell you whether you have the right to put them there.

The NCSC's guidance on sensitive data is blunt: classify your data first, then decide what can go into which tool. The majority do it the other way round.

Myth Five — Prompt Injection Is a Fixable Bug

In December 2025, the NCSC published a blog post titled "Prompt injection is not SQL injection" that made one point very clearly: prompt injection may never be fully mitigated. SQL injection became manageable because developers could draw a firm line between commands and untrusted data. With LLMs, that line does not exist inside the model. Every token is treated as a potential instruction.

The NCSC's recommendation was not to wait for a fix. It was to design systems that assume the model is, in their word, "inherently confusable" and to manage the risk through architecture — limiting what the model can access, restricting what actions it can trigger, and building audit trails.

For an SMB running Copilot, this means that any workflow where an LLM reads untrusted content (incoming emails, uploaded documents, web pages) and then takes an action (drafting a reply, moving a file, updating a record) has a prompt injection surface. The answer is not to stop using the tool. The answer is to keep a human in the loop for anything that matters.

Myth Six — The NCSC Said AI Is Too Risky for Small Businesses

The NCSC runs an entire small business guide that includes advice on using AI tools safely. Their position is not that AI is too risky. Their position is that the risk is manageable if you do the groundwork: classify your data, set clear policies, train your staff, and choose business-tier products with contractual data protections.

The NCSC also published guidelines for secure AI system development, co-signed by agencies from 18 countries. Those guidelines are aimed at organisations building AI, not just using it. But the core principle applies to everyone: understand what your data is doing at every stage.

How the Products Actually Compare on Data Handling

FeatureChatGPT Free/PlusChatGPT BusinessChatGPT EnterpriseMicrosoft 365 Copilot---------------Data used for trainingYes (opt-out available)No (by default)No (by default)NoUK data residencyNoNoYes (from Oct 2025)Yes (UK South/West)Data stays in tenantNoNoConfigurableYesSOC 2 certifiedNoYesYesYesDPA availableNoYesYesYesZero retention optionNoNoYes (API)N/AGDPR compliant tierNoPartialYesYes

Eight-Point Checklist for Getting Your AI Data Policy Right

1. Classify your data before you choose a tool — the NCSC says this is the starting point, not the afterthought. 2. Ban the free consumer tier on work devices, not AI itself — give staff a business-tier alternative. 3. Audit your SharePoint and OneDrive permissions before deploying Copilot — it will surface everything your permissions allow. 4. Check whether Anthropic models are enabled in your Copilot tenant — if you need UK-only processing, they should be off. 5. Require a data processing agreement for any AI tool that handles personal data — ChatGPT Business and Enterprise both offer one. 6. Do not feed personal data into prompts without a lawful basis — data residency alone does not satisfy UK GDPR. 7. Keep a human in the loop for any AI workflow that reads untrusted content and then takes an action. 8. Review the NCSC's published guidance yourself — it is free, it is public, and it is shorter than the commentary written about it.

Frequently Asked Questions

Does ChatGPT use my business data to train its models?

If you are on ChatGPT Enterprise, ChatGPT Business, or the API, no. OpenAI's published policy states that these tiers are opted out of model training by default. The free and Plus tiers do use data for training by default, but individual users can opt out in their settings. The NCSC's warning about query visibility applies to the consumer tier, not the business products.

Is Microsoft 365 Copilot safe for sensitive UK business data?

Copilot keeps data within your Microsoft 365 tenant boundary and does not use it for model training. For UK-provisioned tenants, data at rest stays in UK South and UK West data centres. The risk is not Copilot itself but your existing SharePoint and OneDrive permissions — Copilot will surface any document your permissions allow, including ones users were never meant to find.

Did the NCSC recommend banning ChatGPT?

No. The NCSC said organisations should ensure staff who want to experiment with LLMs can do so without putting organisational data at risk. That is a recommendation for controlled access and clear data policies, not a ban. A blanket ban pushes usage to personal devices and the free tier, which has weaker data protections.

What is prompt injection and why does the NCSC say it cannot be fully fixed?

Prompt injection is an attack where untrusted content — an email, a document, a web page — contains instructions that the AI model follows as if they came from the user. The NCSC explained in December 2025 that unlike SQL injection, there is no firm boundary between commands and data inside an LLM. Every token can be interpreted as an instruction. The recommended approach is to design systems that assume the model is confusable and keep humans in the loop for actions that matter.

Does UK data residency make ChatGPT Enterprise GDPR compliant?

Data residency is necessary but not sufficient. You still need a lawful basis for processing, a data processing agreement, a record of processing activities, and — if personal data goes into prompts — a legitimate interest assessment or consent. Residency tells you where the data sits. It does not tell you whether you have the right to process it.

Are Anthropic Claude models in Microsoft Copilot subject to UK data processing rules?

Not currently. Microsoft has stated that Anthropic models in Copilot are excluded from the EU Data Boundary and UK in-country processing commitments. For UK tenants, these models are disabled by default. If your organisation requires UK-only data processing, check that nobody has enabled them.

About the Author

Photo of Kate Bennett
Kate Bennett

CEO of Disruptive LIVE

As the CEO of Disruptive LIVE, Kate has a demonstrated track record of driving business growth and innovation. With over 10 years of experience in the tech industry, I have honed my skills in marketing, customer experience, and operations management. As a forward-thinking leader, I am passionate about helping businesses leverage technology to stay ahead of the competition and exceed customer expectations. I am always excited to connect with like-minded professionals to discuss industry trends, best practices, and new opportunities.