81% of AI Fraud Cases in 2025 Relied on Deepfakes

Cybernews has analysed the AI Incident Database and found that deepfakes were the leading category of AI incident in 2025, with 81% of all AI fraud cases relying on deepfake technology.

The research highlights the rapidly growing threat posed by AI-generated synthetic media in financial fraud, identity theft and social engineering attacks.

Key Findings

  • 81% of AI fraud cases in 2025 involved deepfake technology.

  • Deepfakes are now the most common tool used in AI-facilitated financial fraud.

  • The accessibility of deepfake generation tools has lowered the barrier to entry for cybercriminals.

  • Identity verification systems face increasing challenges from AI-generated synthetic media.

The findings underscore the urgent need for organisations to invest in advanced deepfake detection capabilities and multi-layered identity verification processes.
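
To make that recommendation concrete, the sketch below is a hypothetical illustration (not part of the Cybernews research) of how a multi-layered identity check might combine independent signals, such as a deepfake-detector score, an active liveness challenge and document verification, so that a convincing deepfake on its own is not enough to pass. All names and thresholds are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class VerificationSignals:
    """Independent signals gathered during an identity check (hypothetical)."""
    deepfake_score: float    # 0.0 = likely genuine media, 1.0 = likely synthetic
    liveness_passed: bool    # active liveness challenge (blink, head turn, etc.)
    document_verified: bool  # ID document checked against issuing templates


def verify_identity(signals: VerificationSignals,
                    deepfake_threshold: float = 0.3) -> bool:
    """Multi-layered decision: every layer must pass independently,
    so defeating a single layer (e.g. with synthetic video) is not enough."""
    if signals.deepfake_score >= deepfake_threshold:
        return False  # synthetic-media detector flags the submitted video/image
    if not signals.liveness_passed:
        return False  # replayed or generated media typically fails liveness
    if not signals.document_verified:
        return False  # document layer catches pure face-swap attempts
    return True


if __name__ == "__main__":
    genuine = verify_identity(VerificationSignals(0.12, True, True))
    spoofed = verify_identity(VerificationSignals(0.85, True, True))
    print(genuine, spoofed)  # True False
```

The design point is simply that the layers are ANDed together: a fraudster who bypasses the deepfake detector still has to defeat liveness and document checks, which is the "multi-layered" property the findings call for.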