Earlier this year, the Director General of MI5 warned Parliament about AI systems capable of evading human oversight. The House of Lords debated what steps the government should take. That was the national security framing. New research from ISACA suggests the same question applies at enterprise level, and the answers are not reassuring.
ISACA's 2026 AI Pulse Poll, drawn from more than 600 IT and business professionals across Europe, found that 59% of respondents do not know how quickly their organisation could halt an AI system in the event of a security incident. Only 21% said their organisation could do so within half an hour.
Operational risk is not abstract
AI is now embedded in core business processes across sectors. A compromised or malfunctioning system that cannot be stopped promptly creates regulatory exposure under the EU AI Act, operational risk to the processes it supports, and reputational damage that travels faster than any remediation effort.
The governance gaps extend beyond the kill switch. Fewer than half (42%) of respondents said they were confident their organisation could investigate and explain a serious AI incident to leadership or regulators. Only 11% were completely confident. A third of organisations (33%) do not require employees to disclose when AI has been used in work products, leaving fundamental gaps in visibility over where and how AI is operating across the business.
Accountability lines are unclear
A fifth of respondents (20%) do not know who would ultimately be accountable if an AI system caused harm. Only 38% identified the board or an executive, a finding at odds with the direction of regulation, where liability is being pushed firmly towards senior leadership.
The challenge is not whether organisations are using AI. They are. The challenge is whether they have built the infrastructure to govern it. That means not just policies on paper, but the technical controls, audit trails, skilled professionals and clear lines of accountability needed to manage AI responsibly.
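To make "technical controls" concrete, the sketch below shows one minimal pattern such a control could take: a central halt flag consulted before every AI action, with each decision written to an audit trail. It is illustrative only; the names are hypothetical and do not come from ISACA's research.

```python
import logging
import threading
from datetime import datetime, timezone

class AIKillSwitch:
    """Hypothetical sketch of a kill-switch control with an audit trail."""

    def __init__(self):
        self._halted = threading.Event()          # one flag, one place to stop everything
        self._audit = logging.getLogger("ai_audit")

    def halt(self, reason: str) -> None:
        """Stop all AI-driven actions and record when and why."""
        self._halted.set()
        self._audit.warning("HALT at %s: %s",
                            datetime.now(timezone.utc).isoformat(), reason)

    def resume(self) -> None:
        self._halted.clear()
        self._audit.info("RESUMED at %s", datetime.now(timezone.utc).isoformat())

    def guard(self, action_name: str) -> None:
        """Called before every AI action; refuses to proceed if halted."""
        if self._halted.is_set():
            self._audit.error("BLOCKED action %r: system halted", action_name)
            raise RuntimeError(f"AI system halted; refusing action {action_name!r}")
        self._audit.info("ALLOWED action %r", action_name)
```

With a single flag consulted on every action, "how quickly could you halt the system" has a measurable answer, and the audit log is the kind of record that the 42% who doubted they could explain an incident to regulators would need to produce.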
On the surface, oversight practices offer some comfort: 40% said humans approve most AI-generated actions before execution, with a further 26% reviewing decisions after the fact. But without broader governance, clear accountability and trained staff, human-in-the-loop review risks becoming a rubber stamp rather than a genuine safeguard.
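A pre-execution review step only earns the name safeguard if it can actually refuse. As a sketch of what that gate might look like (hypothetical names, assuming a simple synchronous approval rather than any specific product), consider:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProposedAction:
    description: str
    proposed_by: str  # identifier of the AI system proposing the action

def request_approval(action: ProposedAction, reviewer: str) -> bool:
    """Block until a named human approves or rejects an AI-proposed action.

    Illustrative only: a real deployment would route this through an
    approval or ticketing system rather than a console prompt.
    """
    print(f"AI system {action.proposed_by} proposes: {action.description}")
    answer = input(f"{reviewer}, approve? [y/N] ").strip().lower()
    approved = answer == "y"
    # Record the decision with reviewer and timestamp so it can be audited.
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"{'APPROVED' if approved else 'REJECTED'} by {reviewer}")
    return approved

action = ProposedAction("send refund of £120 to customer 4418", "refund-agent-v2")
if request_approval(action, reviewer="duty-manager"):
    print("Executing action...")   # only runs after explicit human sign-off
else:
    print("Action rejected; nothing executed.")
```

The design choice that matters is that rejection is the default (the [y/N] prompt): an inattentive reviewer who simply presses Enter blocks the action rather than waving it through, which is the difference between a genuine gate and a rubber stamp.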
ISACA has 195,000 members across 190 countries. The full 2026 AI Pulse Poll results are due later this year.