DDN Redefines AI and High-Performance Computing at Scale with Google Cloud Managed Lustre Innovations

New Google Cloud Managed Lustre capabilities with DDN EXAScaler improve AI training, inference, and high-performance computing, delivering scale, performance, and economics.

DDN announced innovations involving Google Cloud Managed Lustre, unveiled at Google Cloud Next 2026. Built on DDN's proven Lustre expertise and EXAScaler, and delivered in collaboration with Google Cloud, these advancements redefine what's possible for AI training, inference, and high-performance computing in the cloud.

With performance scaling to 10 terabytes per second, Google Cloud Managed Lustre delivers improved throughput, elasticity, and cost efficiency, enabling enterprises to run the world's most demanding AI and HPC workloads. The launch underscores DDN's vision to power the full AI lifecycle, from training and fine-tuning to inference and large-scale simulation, through a unified, high-performance data platform.

"This is not just a product milestone, it's a market-shaping moment," said Alex Bouzari, CEO at DDN. "We are delivering one of the fastest-growing, highest-performance managed Lustre services in the industry, purpose-built for the realities of modern AI at scale. This announcement reinforces DDN's leadership in AI data platforms and our shared commitment to helping customers innovate faster, at lower cost, and with greater confidence."

Built for the Next Generation of AI

Google Cloud Managed Lustre provides a POSIX-compliant, parallel file system that delivers high throughput and low latency. Customers across industries, including AI, financial services, robotics, autonomous systems, and advanced research, are rapidly adopting the platform to power:
A key innovation unveiled at Google Cloud Next is the use of Managed Lustre as a shared KV-cache for AI inference, dramatically improving performance and economics. By leveraging Lustre’s ultra-low latency and high aggregate throughput, customers can avoid redundant computation and scale inference across clusters with virtually unlimited shared cache capacity. In benchmark testing, this approach delivered:
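The shared KV-cache pattern described above can be sketched in a few lines. This is an illustrative assumption of how such a cache might work, not DDN's or Google Cloud's implementation: the class `SharedKVCache` and the helper `expensive_prefill` are hypothetical names, and real deployments would serialize attention key/value tensors to a shared Lustre mount so that every inference node can reuse prefill work done by any other node.

```python
# Minimal sketch of a shared prefix KV-cache for LLM inference.
# All names (SharedKVCache, expensive_prefill) are illustrative; a real
# system would store serialized attention KV tensors on a shared Lustre
# mount visible to every node in the inference cluster.
import hashlib
import os
import pickle
import tempfile


def expensive_prefill(prompt_tokens):
    """Stand-in for the costly prefill pass that builds KV state."""
    # Pretend each token yields one "KV entry" (here, simply token * 2).
    return [t * 2 for t in prompt_tokens]


class SharedKVCache:
    def __init__(self, shared_dir):
        # shared_dir would be a directory on the shared file system,
        # so any node can hit entries written by any other node.
        self.shared_dir = shared_dir

    def _path(self, prompt_tokens):
        key = hashlib.sha256(repr(prompt_tokens).encode()).hexdigest()
        return os.path.join(self.shared_dir, key + ".kv")

    def get_or_compute(self, prompt_tokens):
        """Return (kv_state, was_cache_hit)."""
        path = self._path(prompt_tokens)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f), True
        kv = expensive_prefill(prompt_tokens)
        with open(path, "wb") as f:
            pickle.dump(kv, f)
        return kv, False


# Usage: two requests with the same prompt prefix share the cached state,
# so the second request skips the redundant prefill computation.
cache_dir = tempfile.mkdtemp()
cache = SharedKVCache(cache_dir)
kv1, hit1 = cache.get_or_compute([1, 2, 3])  # miss: prefill runs
kv2, hit2 = cache.get_or_compute([1, 2, 3])  # hit: prefill skipped
```

The economic argument in the announcement follows directly from this pattern: because the cache lives on a shared, high-throughput file system rather than in each node's local memory, its capacity is effectively bounded by storage rather than by GPU or host RAM, and any node's prefill work benefits the whole cluster.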
The result is faster, more responsive AI applications, and significantly lower cost of inference at scale.

A Collaboration Driving Cloud-Scale Performance

For the offering, DDN combines its long-standing Lustre expertise and extreme-scale data systems with Google Cloud's elastic infrastructure, innovations in compute and Hyperdisk, global reach, and access to cutting-edge accelerators, including TPUs.

"Managed Lustre enables us to scale AI model training for AFEELA Intelligent Drive by 3x compared to other Google Cloud solutions," said Motoi Kataoka, Senior Manager, AI & Data Analytics Platform, Sony Honda Mobility Inc.

New capabilities announced at Google Cloud Next also include a single, dynamic hot and cold tier, designed to deliver high performance for hot data with dramatically improved economics, eliminating the complexity, performance cliffs, and SKU sprawl common in competing tiered storage solutions.

Setting the Pace for the Industry

With rapid customer adoption, explosive capacity growth, and performance milestones, the combination of DDN and Google Cloud Managed Lustre is setting a new benchmark for AI and HPC in the cloud.

"This is what happens when deep infrastructure expertise meets cloud-scale innovation," said Kirill Tropin, Group Product Manager at Google Cloud. "Our partnership with DDN enables customers to run their most demanding AI workloads with the performance, scale, and simplicity they need, today and into the future."

Source: DDN media announcement