Robot and human hands meeting representing AI trust

The EU AI Act is moving from theoretical guidance to enforceable regulation, and 63 per cent of business leaders across the UK, France and Germany say they feel unprepared to lead in an AI-enabled world. That figure, from a recent industry study, points to gaps not in technology adoption but in governance structures and organisational mindset.

Mark Appleton, Group Lead for Vendor Ecosystem Development at ALSO, argues that trust is shifting from a brand attribute to a compliance requirement.

As the EU AI Act becomes fully enforceable, the era of opaque algorithmic decision-making is ending. Regulators now expect evidence in the form of audit logs, model documentation, and human oversight frameworks.

Mark Appleton, Group Lead Vendor Ecosystem Development, ALSO
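What that evidence might look like in practice will vary by organisation. As an illustrative sketch only (the field names and structure below are assumptions, not a regulatory schema or anything ALSO prescribes), an audit-log entry for a single AI decision could tie together the model version, its inputs and output, and any human oversight applied:

```python
import json
import hashlib
from datetime import datetime, timezone

def log_ai_decision(model_id: str, model_version: str, inputs: dict,
                    output: dict, human_reviewer: str | None) -> dict:
    """Build a structured audit-log entry for one AI decision.

    Field names are illustrative, not a mandated schema: the point is that
    each decision is traceable to a model version, its inputs, and the
    human oversight (if any) applied to it.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": human_reviewer,  # None if fully automated
        "oversight": "human-in-the-loop" if human_reviewer else "automated",
    }
    # Hash the entry so later tampering is detectable during an audit.
    entry["integrity_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# Example: record a credit-scoring decision reviewed by a named person.
record = log_ai_decision(
    model_id="credit-risk-scorer",
    model_version="2.4.1",
    inputs={"applicant_id": "A-1029", "income_band": "C"},
    output={"score": 0.82, "decision": "approve"},
    human_reviewer="j.doe@example.com",
)
print(json.dumps(record, indent=2))
```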

His argument extends beyond regulatory compliance into operational security. AI-generated phishing can now mimic individual writing styles; deepfake video undermines remote identity verification. Traditional defences such as passwords, one-time codes and document uploads are increasingly insufficient. Appleton contends that security and AI governance should be treated as a single framework rather than separate disciplines.

Businesses that don’t adapt risk an erosion of customer confidence and ecosystem exclusion.

Mark Appleton, Group Lead Vendor Ecosystem Development, ALSO

The practical prescription: standards-based infrastructure, real-time validation and dynamic risk assessment, integrated with AI governance from the outset rather than bolted on afterwards.
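As a rough sketch of what dynamic risk assessment could mean in code (the signal names, weights and thresholds here are illustrative assumptions, not a standard or Appleton's own method), an authentication or transaction check might combine several real-time signals into a single decision rather than relying on a password alone:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Real-time signals gathered for one authentication attempt.

    Names and scales are illustrative: each score is assumed to be
    normalised to 0.0 (no concern) .. 1.0 (high concern) upstream.
    """
    deepfake_likelihood: float   # from a liveness / media-forensics check
    phishing_likelihood: float   # e.g. suspicious referrer or message context
    device_trust_gap: float      # 0.0 for a known managed device, 1.0 for unknown
    geo_velocity_anomaly: float  # impossible-travel style anomaly score

def assess_risk(s: SessionSignals) -> str:
    """Combine signals into a coarse decision: allow, step-up, or block.

    Weights and thresholds are assumptions for illustration; in practice
    they would be tuned, documented, and reviewed under the same governance
    framework that covers the AI models producing the signals.
    """
    score = (0.35 * s.deepfake_likelihood
             + 0.25 * s.phishing_likelihood
             + 0.20 * s.device_trust_gap
             + 0.20 * s.geo_velocity_anomaly)
    if score >= 0.7:
        return "block"
    if score >= 0.4:
        return "step-up"  # e.g. require a stronger, phishing-resistant factor
    return "allow"

# Example: a remote identity check on an unknown device with a borderline
# liveness result is stepped up rather than waved through.
print(assess_risk(SessionSignals(0.5, 0.2, 0.9, 0.1)))  # -> "step-up"
```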