AI brain maze highlighting governance and security risks

More than half of enterprises have fully or partially deployed generative AI, but fewer than one in five have reached what researchers term "AI maturity" in cybersecurity, according to a global study of 1,878 IT and security practitioners published by OpenText and the Ponemon Institute.

The report, Managing Risks and Optimising the Value of AI, GenAI and Agentic AI, surveyed organisations across North America, Asia-Pacific, Europe, the Middle East, Africa, and Latin America. Its central finding is a widening gap between the pace of AI adoption and the governance structures meant to keep it in check.

Only 43 per cent of respondents have adopted a risk-based AI governance approach. Fewer still, 41 per cent, have AI-specific data privacy policies. Nearly six in ten said AI makes compliance with existing privacy and security regulations harder, not easier.

Trust gaps limiting effectiveness

Just 51 per cent rated AI as effective at reducing the time to detect anomalies or emerging threats. Nearly two-thirds said minimising model and bias risks, including unfair or discriminatory outputs, remains very or extremely difficult. Fewer than half believed their AI models could learn robust norms and make safe decisions autonomously.

AI maturity isn't just about adopting AI tools; it's about doing it responsibly. Security and governance are foundational to getting real value from AI.

Muhi Majzoub, EVP Product and Engineering, OpenText

The study was conducted in November 2025 across organisations of varying sizes in financial services, healthcare, technology, energy, and manufacturing.