Akamai Acquires Nvidia Blackwell GPUs to Power Distributed AI Inference
Akamai Technologies has announced the acquisition of Nvidia Blackwell GPUs to expand its distributed cloud platform for AI inference workloads. The move positions Akamai to serve the rapidly growing demand for inference computing — which Deloitte estimates will account for two-thirds of all AI compute by 2026.
Unlike traditional centralised GPU cloud offerings, Akamai's approach distributes inference capacity across its global edge network, bringing AI processing closer to end users. The company argues this architecture reduces latency and improves the economics of real-time AI applications.
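To illustrate the idea behind latency-driven edge routing, here is a minimal sketch in Python. The region names, latency figures, and function are hypothetical, invented for this example; they do not represent Akamai's actual API or network topology.

```python
# Hypothetical sketch: route an inference request to whichever edge
# region currently shows the lowest round-trip latency to the user.
# This is the core intuition behind distributed inference at the edge.

def pick_edge_region(latencies_ms: dict[str, float]) -> str:
    """Return the name of the region with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Invented measurements for three example regions (milliseconds).
measured = {"us-east": 48.0, "eu-west": 12.5, "ap-south": 130.0}
print(pick_edge_region(measured))  # → eu-west
```

In a centralised deployment every request pays the round trip to one distant data centre; in the distributed model sketched above, the request is served from the nearest point of presence, which is where the latency advantage comes from.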
The Blackwell platform represents Nvidia's latest generation of data centre GPUs, designed specifically for large-scale AI training and inference. Akamai's deployment strategy signals a broader industry shift: as AI models move from training to production, the compute bottleneck is increasingly at the inference layer rather than in training.

