ISACA launches Advanced in AI Risk credential as oversight lags AI adoption across Europe

ISACA, the global professional association for digital trust, has launched a new credential for experienced risk practitioners now being asked to take on AI oversight: Advanced in AI Risk (AAIR). The certification covers AI risk governance, lifecycle risk management and programme management, and is open to candidates already holding one of 25 prerequisite certifications including CISA, CISM, CRISC, CGEIT, CDPSE, CGRC and CISSP.

The launch is paired with new ISACA research suggesting most European organisations have rolled out AI without the governance infrastructure to manage it. Drawn from the 2026 AI Pulse Poll of 681 digital trust professionals across Europe, with fieldwork between 6 and 22 February 2026, the data points to a structural accountability gap rather than a tooling shortfall.

The numbers, in summary:

  • 59% do not know how quickly their organisation could halt an AI system during a security incident

  • Only 21% could halt one within half an hour

  • 20% do not know who would be accountable if an AI system caused harm

  • 42% are confident they could investigate and explain a serious AI incident to leadership or regulators; only 11% are completely confident

  • 33% of organisations do not require employees to disclose when AI has been used

  • 38% identify the board or an executive as the ultimate owner of AI risk

"The enthusiasm to adopt AI has outpaced the skills to govern it. Many organisations cannot tell you how quickly they could stop an AI system, who is accountable if it goes wrong, or how they would explain a failure to a regulator. That is not a technology problem - it is a governance and skills problem."

Chris Dimitriadis, Chief Global Strategy Officer at ISACA

ISACA's reading is that AAIR addresses the second half of that equation. The certification is deliberately broad in eligibility, reflecting the fact that AI risk responsibility is being absorbed into roles well beyond traditional IT risk teams: audit, compliance, privacy and security functions are all picking up some share of the work. The exam tests the ability to evaluate AI vulnerabilities before and after deployment, assess business impact across uncertain and evolving systems, and explain AI risk posture credibly to a board or regulator.

The regulatory backdrop is the EU AI Act, which places explicit accountability and oversight requirements on operators of high-risk AI systems. Organisations that cannot answer the basic governance questions in ISACA's poll will, under the Act, struggle to demonstrate the controls regulators expect.

"The tools to manage AI risk already exist. Risk management, prevention controls, detection, incident response and recovery are all foundations of good cybersecurity practice, and they need to be applied to AI with the same rigour. AAIR exists to build the profession that can do that work. Closing the governance gap will take more than a handful of experts - we all need to be involved."

Chris Dimitriadis

Supporting materials are available alongside the certification: an AAIR Online Review Course, a Questions, Answers and Explanations Database, and a Review Manual in digital and print formats.

The full 2026 AI Pulse Poll is due to be published in May 2026.
