Western Digital Validates AI Storage with MLPerf V2 Results

Western Digital’s OpenFlex Data24 NVMe-oF Storage Platform extends the high performance of NVMe® flash over Ethernet fabric to enable low-latency shared storage for scalable, disaggregated AI infrastructure.

SMEStreet Edit Desk
As AI workloads grow in complexity and scale, the ability of storage systems to keep pace with accelerated compute infrastructure has become a critical factor in overall performance. Western Digital has announced its MLPerf® Storage V2 submission results, validating the real-world capabilities of its OpenFlex™ Data24 4000 Series NVMe-oF™ Storage Platform. The results affirm the ability of the OpenFlex Data24 EBOF (Ethernet Bunch of Flash) to meet the rigorous demands of modern AI workloads, delivering high performance, efficiency and scalability in a cost-effective solution for AI infrastructure.

Real-World Testing for AI at Scale

Designed to simplify deployment, reduce cost and grow with GPU demand, the OpenFlex Data24 allows storage and compute to scale independently for greater flexibility.
 
To reflect realistic and demanding deployment scenarios where storage systems must keep pace with accelerated GPU infrastructure, Western Digital collaborated with PEAK:AIO, a high-performance software-defined storage (SDS) provider with the ability to ingest, stage and serve large volumes of data at high speeds.
 
The validation submission utilized KIOXIA CM7-V Series NVMe SSDs, selected for their outstanding performance characteristics in demanding AI workloads. When deployed in the OpenFlex Data24 enclosure, they enable sustained, high-performance disaggregated data delivery to many GPU client-nodes.
 

MLPerf Storage V2 Benchmark Results

MLPerf is widely regarded as the industry’s gold standard for AI benchmarking. Western Digital’s MLPerf Storage V2 results showcase how this architecture not only delivers performance at scale but does so with a focus on efficiency and practical deployment economics with and without a software-defined storage (SDS) layer.
 
MLPerf Storage evaluates how well a storage platform supports distributed AI environments across multiple concurrent GPU clients. It uses GPU client nodes, systems that simulate the behavior of an AI server accessing storage during training or inferencing, to generate the I/O load patterns typical of real-world GPU workloads. The AI training tests used from the MLPerf Storage suite measure how effectively the system serves AI workloads that stress different aspects of storage I/O, including throughput and concurrency, across various deep learning models. Two key workload benchmarks were used:

3D U-Net Workload

3D-UNet is a deep learning model used in medical imaging and volumetric segmentation. It places a much heavier load on storage systems due to its large, 3D input datasets and intensive data-streaming read patterns. As such, it is a more stringent benchmark for demonstrating sustained high-bandwidth and low-latency performance across multi-node AI workflows.
 
In this model:
  • Western Digital’s OpenFlex Data24 achieved sustained read throughput of 106.5 GB/s (99.2 GiB/s), saturating 36 simulated H100 GPUs across three physical client nodes, demonstrating the EBOF’s ability to handle bandwidth-intensive, high-parallelism training tasks with ease.
  • With the PEAK:AIO AI Data Server, OpenFlex Data24 was able to deliver 64.9 GB/s (59.6 GiB/s), saturating 22 simulated H100 GPUs from a single head server and single client node.
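The paired GB/s and GiB/s figures above differ only in unit convention: decimal gigabytes (10^9 bytes) versus binary gibibytes (2^30 bytes). The conversion can be checked directly:

```python
def gb_to_gib(gb: float) -> float:
    """Convert decimal gigabytes (10**9 bytes) to binary gibibytes (2**30 bytes)."""
    return gb * 1e9 / 2**30

# The standalone 3D U-Net result: 106.5 GB/s is about 99.2 GiB/s.
print(round(gb_to_gib(106.5), 1))  # → 99.2
```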

ResNet-50 Workload

ResNet-50 is a widely used convolutional neural network designed for image classification. It serves as a benchmark for training throughput, representing a balanced mix of compute and data movement. With both random and sequential I/O patterns, using medium-sized image reads, it is useful in evaluating how well a system handles high-frequency access to smaller files and rapid iteration cycles. 
 
In this model:  
  • Western Digital’s OpenFlex Data24 served 186 simulated H100 GPUs across three client nodes, with a high GPU-to-drive ratio that reflects the platform’s efficient use of physical media.
  • With the PEAK:AIO AI Data Server, OpenFlex Data24 was able to saturate 52 simulated H100 GPUs from a single head server and single client node.
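The GPU-client testing approach described above can be illustrated with a toy sketch. This is not MLPerf's actual harness (which builds on the DLIO data-loading benchmark); here a few threads stand in for GPU client nodes, each repeatedly reading from a shared file while aggregate read throughput is measured. All names and parameters are illustrative.

```python
import os
import tempfile
import threading
import time

def client_reads(path, reads, chunk, totals, idx):
    # Each simulated "GPU client" repeatedly reads fixed-size chunks,
    # like a training job fetching samples from shared storage.
    with open(path, "rb") as f:
        done = 0
        for _ in range(reads):
            f.seek(0)
            done += len(f.read(chunk))
        totals[idx] = done

def measure(num_clients=4, reads=50, chunk=1 << 20):
    # Create a dummy "dataset" file standing in for real training samples.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(chunk))
        path = f.name
    totals = [0] * num_clients
    threads = [threading.Thread(target=client_reads,
                                args=(path, reads, chunk, totals, i))
               for i in range(num_clients)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return sum(totals), sum(totals) / elapsed  # total bytes read, bytes/sec

bytes_read, bps = measure()
print(f"{bytes_read / 2**20:.0f} MiB read at {bps / 2**20:.1f} MiB/s aggregate")
```

A real benchmark additionally models per-accelerator sample rates and counts a run as valid only if the simulated GPUs stay busy, which is what "saturating N simulated H100 GPUs" refers to in the results above.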
 
“These results validate Western Digital’s disaggregated architecture as a powerful enabler and cornerstone of next-generation AI infrastructure, maximizing GPU utilization while minimizing footprint, complexity and overall total cost of ownership,” said Kurt Chan, vice president and general manager, Western Digital Platforms Business. “The OpenFlex Data24 4000 Series NVMe-oF Storage Platform delivers near-saturation performance across demanding AI benchmarks, both standalone and with a single PEAK:AIO AI Data Server appliance, translating to faster time-to-results and reduced infrastructure sprawl.”
 
“These MLPerf results spotlight the breakthrough efficiency achieved by combining PEAK:AIO’s software-defined AI Data Server with the scalability of Western Digital’s OpenFlex Data24 and the performance density of KIOXIA’s CM7-V Series SSDs,” said Roger Cummings, President and CEO at PEAK:AIO. “Together, we’re delivering high-performance AI infrastructure that’s faster to deploy, more efficient to operate, and easier to scale. It’s a compelling proof point that high performance no longer requires high complexity.”
 
Whether organizations are just beginning their AI journey or scaling to hundreds of GPUs, Western Digital’s OpenFlex Data24, with industry-leading connectivity via Western Digital RapidFlex™ network adapters, enables up to 12 hosts to be attached without a switch. The platform offers simplified, predictable, high-performance AI-infrastructure growth without the upfront costs or power demands of some other solutions, allowing organizations to scale AI workloads with confidence.