Eighty-two per cent of European organisations have formally permitted AI use at work. Fewer than half have a policy that governs it. ISACA's 2026 AI Pulse Poll, released today, maps what that gap looks like when threat actors are already exploiting the same technology.
The survey of 681 digital trust professionals across Europe, conducted in February 2026, finds that 35% of respondents cannot say whether their organisation experienced an AI-powered cyberattack in the past 12 months. Seventy-one per cent report that AI-driven phishing and social engineering attacks are now more difficult to detect; 58% say authenticating digital information has become harder as a result of AI.
AI's role in security is not one-directional. Forty-three per cent of respondents say it has improved their ability to detect and respond to threats, and 34% are already deploying AI specifically for that purpose. But realising that defensive potential requires governance infrastructure most organisations have not yet built.
Only 42% have a formal, comprehensive AI policy. A third do not require employees to disclose when AI contributed to a work product, creating audit blind spots across operations. Eighty-seven per cent flag employee misuse as a concern, and 26% cite lack of trust in AI's handling of intellectual property as their primary workplace AI challenge.
Chris Dimitriadis, Chief Global Strategy Officer at ISACA, said: "AI has fundamentally changed the threat landscape. Attackers can now hack at the speed of intent, and too many organisations don't even know whether they've already been on the receiving end. The fact that so many businesses are operating without the governance to see where AI is being used, let alone how, makes that exposure significantly worse."
Workforce readiness compounds the risk. Over half of respondents (54%) say they need to upskill within six months to retain their position or advance, yet 21% of organisations provide no formal AI training at all. Among regulatory frameworks, the EU AI Act is the most referenced at 45%, ahead of NIST at 26%. A quarter of organisations follow no framework.
The findings land after UK Technology Secretary Liz Kendall warned business leaders in April that companies had months, not years, to prepare for catastrophic AI-powered attacks, citing Mythos, an AI tool withheld from public release on security grounds.
Dimitriadis added: "Ungoverned AI doesn't just create operational risk. It actively hands an advantage to those who want to cause harm. Closing that gap starts with professional development and advancing the expertise needed to build and embed AI governance that stands up under pressure. Doing so is now a security imperative."
The AI applications most widely adopted in European workplaces are creating written content (69%), improving productivity (63%), automating repetitive tasks (54%), and analysing large datasets (52%). Seventy-seven per cent cite time savings as a tangible benefit. The productive surface and the threat surface have grown in parallel.