
VDURA, a software-defined storage company that develops data infrastructure for AI and HPC (high-performance computing), has announced the launch of its first scalable AMD Instinct GPU reference architecture in collaboration with AMD. The validated blueprint defines how compute, storage and networking should be configured for efficient, repeatable large-scale GPU deployments. The design pairs the VDURA V5000 storage platform with AMD Instinct MI300 Series Accelerators to eliminate performance bottlenecks and simplify deployment for the most demanding AI and HPC environments.
VDURA builds a data platform for AI and high-performance computing that combines flash-first speed with hyperscale capacity and a 12-nines durability commitment.
“Our recent VDURA V11 release, with its new microservices-based architecture, gives us unparalleled flexibility to rapidly adopt and optimize new hardware platforms,” said Ken Claffey, CEO of VDURA. “Hot on the heels of launching our industry-leading V5000 Hybrid Solution that combines HDD and SSD technologies, today we’re introducing the newest member of the family—the V5000 All-Flash Appliance.”
Purpose-built for breakthrough AI performance, the V5000 All-Flash eliminates storage bottlenecks, accelerates pipelines, supports write-intensive checkpointing and delivers effortless scalability with zero-downtime availability.
“Publishing our first scalable reference architecture with AMD Instinct MI300 Series Accelerators underscores our shared commitment to leading next-generation AI infrastructure,” Claffey said. “AI workloads aren’t static. They are constantly evolving, growing in scale and complexity. AI data infrastructure must be adaptable — scaling in every dimension, from throughput, IOPS, latency, and metadata performance to capacity, availability, and durability. The V5000 platform was designed for this very future, ensuring enterprises can scale seamlessly as AI demands evolve over time.”
AI and HPC pipelines are increasingly limited by storage that cannot keep pace with growing data volumes. This slows GPU utilization, increases energy costs and reduces overall efficiency. The new reference architecture is engineered to keep AMD Instinct GPUs fully utilized, delivering sustained performance with a design that is efficient, expandable and simple to operate.
That’s why, following a technical evaluation, AMD selected VDURA for its AMD Instinct GPU-optimized performance, low client overhead and proven ability to scale. The solution has already been chosen for a U.S. federal systems integrator’s AI supercluster, demonstrating its readiness for mission-critical workloads.
“This provides a clear blueprint for customers looking to maximize AMD Instinct GPU performance and simplify large-scale deployment,” Claffey stated.
The reference architecture provides compute, storage and networking at scale. It supports 256 AMD Instinct GPUs per scalable unit, achieves throughput of up to 1.4 TB/s and 45 million IOPS in an all-flash layout, and delivers around 5 PB of usable capacity in a 3 Director and 6 V5000 node configuration. Data durability is assured through multi-level erasure coding, while networking options include dual-plane 400 GbE and optional NDR/NDR200 InfiniBand.
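As a rough sanity check, the published per-scalable-unit figures can be broken down per GPU. This is a back-of-envelope sketch; the aggregate numbers come from the announcement, but the per-GPU split is derived arithmetic, not a vendor-published metric:

```python
# Published figures for one scalable unit of the reference architecture.
GPUS_PER_UNIT = 256      # AMD Instinct GPUs per scalable unit
THROUGHPUT_TBPS = 1.4    # aggregate throughput, TB/s (all-flash layout)
IOPS_MILLIONS = 45       # aggregate IOPS, millions

# Derived per-GPU shares (our own arithmetic, for scale intuition only).
per_gpu_gbps = THROUGHPUT_TBPS * 1000 / GPUS_PER_UNIT
per_gpu_kiops = IOPS_MILLIONS * 1_000_000 / GPUS_PER_UNIT / 1000

print(f"Throughput per GPU: {per_gpu_gbps:.2f} GB/s")   # ~5.47 GB/s
print(f"IOPS per GPU: {per_gpu_kiops:.1f}K")            # ~175.8K
```

At roughly 5.5 GB/s of storage bandwidth per accelerator, the design leaves substantial headroom for checkpoint writes and data loading without starving the GPUs.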
This strength is reflected in VDURA’s recent deployments.
“I just recently returned from Germany after an incredible visit with the team at Goethe University’s Center for Scientific Computing, where our V5000 system is now fully deployed and powering their next-gen AI and HPC infrastructure, one of the largest AMD GPU clusters in Europe,” Claffey detailed. “Seeing our platform in action — delivering the high performance and capacity required for their leading AI physics research — is what drives the entire VDURA team.”
The modular design allows organizations to add Director Nodes for extra performance, expand with all-flash storage for more bandwidth, or combine flash and HDD capacity for cost-effective growth, all within a single namespace.
“Now, enterprises can rapidly build customized AI models uniquely aligned with their business objectives, achieving superior results, and gaining significant competitive advantage,” Claffey said.
“I’m proud to contribute to the conversation in HPC and AI, shaping the future of our industry,” Claffey concluded. “The pace of change in this space is remarkable, and I look forward to continuing to drive meaningful progress alongside so many other leaders in the channel.”
