Standard server memory has become a secondary market for the major memory manufacturers. With hyperscalers having locked up most available High-Bandwidth Memory capacity for AI workloads, the knock-on effect for enterprise IT is a roughly 90% price surge and lead times stretching past forty weeks.
Counterpoint Technology Market Research recorded a roughly 90% rise in DRAM prices in Q1 2026, and the cause traces directly to a structural shift in how the largest memory manufacturers allocate their production capacity. Demand from AI hardware for High-Bandwidth Memory (HBM) has been intense enough that manufacturers have redirected up to a quarter of their capacity away from standard DRAM. Nor is the reallocation temporary.
Microsoft, Google and Meta have used their capital reserves to secure long-term HBM supply contracts, effectively guaranteeing that supply will remain concentrated at the hyperscale tier. For mid-market businesses and smaller enterprises, the remaining standard DRAM inventory is shared and scarce.
The practical consequence is a queue. Large DRAM orders now regularly exceed forty weeks, and infrastructure projects planned for 2026 have been rescheduled into 2027 or 2028. The problem is compounded because businesses cannot simply extend the useful life of existing hardware indefinitely: legacy technology is being phased out faster than expected, particularly in the industrial, medical, and automotive sectors that depend on older equipment. Businesses that had anticipated a straightforward equipment refresh now face a procurement gap at both ends: new hardware is delayed, while the old hardware it was meant to replace is losing support.
Steve Spittal, Technology Director at Pulsant, identifies concentration risk as the amplifying factor. Businesses that relied on a small number of suppliers are feeling the biggest impact, because vendors are now prioritising higher-margin AI customers and in some cases dropping lower-volume accounts. Organisations that cannot lock in eighteen months of requirements at current prices face either cancelling digital infrastructure plans or fundamentally reworking how those projects will be delivered.
Infrastructure-as-a-service (IaaS) offers one practical route around the shortage. Capacity is immediately available from colocation and IaaS providers that secured supply in advance of the crunch. The commercial logic shifts: instead of committing capital to purchase peak-priced memory and carrying the risk of further price moves, businesses pay for what they need on a consumption basis. Spittal notes that some providers, including Pulsant, have secured enough inventory to cover near-term growth targets.
The timeline for normalisation remains contested. Samsung has announced plans to increase HBM capacity by 50% in 2026, and SK Hynix has begun mass production of HBM4, but neither move addresses standard DRAM supply directly. If AI demand eases and additional production facilities come online as planned, analysts suggest late 2027 could bring some supply chain stabilisation. The mid-market cannot count on it arriving earlier.