Hammerspace upgrades AI Anywhere data platform software with performance, security and ecosystem enhancements

Floyd Christofferson, VP of Product Marketing at Hammerspace.

Hammerspace, which makes a high-performance data platform for AI Anywhere, has announced the upcoming release of Hammerspace v5.2, delivering performance, security and ecosystem enhancements that help organizations unify, automate and accelerate their AI and high-performance workloads across any on-premises, hybrid or cloud-based infrastructure.

With v5.2, Hammerspace raises the bar on standards-based parallel file system performance, particularly for AI and HPC workloads, continuing the trajectory demonstrated in public benchmarks earlier this year. The new release achieved a 33.7% higher IO500 overall score than results on the previous version published five months ago, with total bandwidth doubling and individual sub-tests showing dramatic improvements, including an over 800% gain in IOR-Hard-Read.

A key component of these performance improvements is Hammerspace’s continued contribution of significant client-side NFS performance enhancements to the standard Linux kernel, improvements specifically designed to accelerate AI and HPC workloads. By tightly integrating Hammerspace software with these upstream kernel advancements, the Data Platform delivers dramatic performance gains without requiring customers to install proprietary software on application servers or lock their data into vendor silos. This standards-based approach makes Hammerspace compatible with any storage platform, so customers can deliver the performance and low latency needed for new workloads such as training, inference or RAG with existing infrastructure and data sets, eliminating the cost and complexity of migrating data to net-new storage silos to launch AI projects.

In addition to the baseline performance gains, v5.2 introduces Tier 0 affinitization, adding locality-aware intelligence to Tier 0 deployments. By automatically aligning data placement with the optimal servers within a GPU cluster, Tier 0 affinitization reduces east-west network traffic to accelerate throughput and simplifies Tier 0 deployments by eliminating the need for manual configuration. The feature is automatic, transparent and enabled by default.

AI and HPC clusters run best when the data they need is as close as possible to the GPUs and CPUs doing the work. In most environments, that means the NVMe SSDs sitting inside each compute server. They’re the fastest, lowest-latency storage possible, and yet much of that performance is stranded, isolated in tiny silos inside individual nodes.

“Hammerspace Tier 0 was created to activate that stranded performance by turning those local NVMe devices into a shared file system the entire cluster can use,” said Floyd Christofferson, VP of Product Marketing at Hammerspace. “Building on the improvements Hammerspace introduced into the Linux kernel in 2024 to further improve Tier 0 performance, the new Hammerspace v5.2 release contains a related enhancement called Tier 0 affinitization that adds locality-awareness to Tier 0 installations.”

The capability builds on the NFS performance enhancements Hammerspace contributed to the Linux kernel in 2024 and, importantly, does not require any proprietary client software or kernel patches on compute servers.

“Compute servers that are clustered for AI and HPC workloads each typically contain a few NVMe SSDs,” Christofferson said. “This storage is the fastest and lowest latency available, residing on the same PCIe bus with the CPUs and GPUs. But because this capacity is broken into many small silos across the servers, it is largely unused. This is a missed opportunity.

“Hammerspace Tier 0 activates these formerly isolated islands of storage, bringing them together into a pNFS-mountable shared file system usable by the entire cluster,” Christofferson emphasized. “When combined with additional tiers of storage and Hammerspace data orchestration, Tier 0 becomes a standards-based high-performance foundation for data-intensive workflows.”

Tier 0 delivers the best possible performance when a compute node can use its own local Tier 0 volume for I/O, instead of crossing the network to read or write to another node’s NVMe. To make that happen, the Anvil, Hammerspace’s metadata server, needs to recognize when the pNFS client requesting a layout is also hosting a Tier 0 storage volume, and then place that local volume first in the layout.

“Previously, the way to improve locality was to configure multiple directories in the file system and apply specific Hammerspace policy objectives to those directories to guide data placement,” Christofferson said. “It worked, but it added configuration steps and ongoing management, especially as environments scaled.”

Tier 0 affinitization removes that manual work. Starting in Hammerspace v5.2, the Anvil automatically detects when a client requesting I/O has a local Tier 0 storage volume. When it sees that, it places that local volume at the top of the list in the pNFS layout for that client. The result: as much I/O as possible is kept local to the requesting node. This maximizes the performance of Tier 0 while keeping the behavior completely automatic and transparent. No extra configuration, no directory-by-directory tuning, and no changes required on the compute servers themselves.
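The layout-ordering behavior described above can be illustrated with a short sketch. All names here are hypothetical, chosen to model the logic as described; Hammerspace’s actual Anvil implementation is not public.

```python
# Illustrative sketch of locality-aware pNFS layout ordering.
# Hypothetical names; this models the behavior described in the
# article, not Hammerspace's actual implementation.

def order_layout(client_id, tier0_volumes):
    """Return Tier 0 volumes for a pNFS layout, with any volume
    hosted on the requesting client promoted to the front."""
    local = [v for v in tier0_volumes if v["host"] == client_id]
    remote = [v for v in tier0_volumes if v["host"] != client_id]
    return local + remote  # the client reads/writes its own NVMe first

volumes = [
    {"host": "node-a", "path": "/dev/nvme0n1"},
    {"host": "node-b", "path": "/dev/nvme0n1"},
    {"host": "node-c", "path": "/dev/nvme0n1"},
]

# node-b asks for a layout: its local volume leads the list,
# so its I/O stays on the local PCIe bus instead of the network.
layout = order_layout("node-b", volumes)
print(layout[0]["host"])  # node-b
```

Because the ordering happens on the metadata side, the client needs no configuration: it simply uses the first device in the layout it receives.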

To support extreme scale in file counts, v5.2 adds Share Referrals, a transparent mechanism that distributes the namespace across as many metadata servers as needed. This enhancement ensures linear scalability, so performance and responsiveness remain steady even as data estates for AI and HPC environments explode.
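One common way to spread a namespace across metadata servers, in the spirit of NFSv4 referrals, is to map each share deterministically to a server. The sketch below is purely illustrative (hypothetical server names and mapping scheme), not Hammerspace’s actual Share Referrals mechanism.

```python
# Illustrative sketch: deterministically mapping shares to metadata
# servers so the namespace can grow across servers. Hypothetical
# scheme, not Hammerspace's actual Share Referrals implementation.
import hashlib

METADATA_SERVERS = ["anvil-1", "anvil-2", "anvil-3"]

def referral_target(share_path):
    """Map a share path to one metadata server, stably."""
    digest = hashlib.sha256(share_path.encode()).digest()
    return METADATA_SERVERS[digest[0] % len(METADATA_SERVERS)]

# Every client computes the same answer for the same share, so
# metadata load spreads out without any central lookup step.
print(referral_target("/projects/training-data"))
```

The key property is transparency: clients follow the referral automatically, so applications see one namespace regardless of how many servers stand behind it.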

“AI is fundamentally changing how organizations interact with their data,” said Molly Presley, SVP Global Marketing at Hammerspace. “Workloads that were once separate are now deeply interconnected, and the data platform must keep pace. The v5.2 advancements strengthen our ability to unify and accelerate data for AI, HPC and enterprise environments without requiring customers to rebuild storage silos or redesign their infrastructure. It marks another important step toward enabling truly AI-ready data everywhere.”

The release also strengthens security options with the addition of Kerberos authentication and Labeled NFS support. By enabling SELinux and other Mandatory Access Control (MAC) systems to transport and enforce security labels across NFS, organizations gain consistent, fine-grained control over data access, which is essential for sensitive research, government and regulated industries.
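As an illustration of the standard client-side mechanics involved, the snippet below shows a generic Linux NFSv4.2 mount with Kerberos privacy; the server name and export path are placeholders, and these are stock Linux commands, not Hammerspace-specific tooling.

```shell
# Generic Linux example: mount an NFSv4.2 export with Kerberos
# privacy (krb5p = authentication + integrity + encryption).
# Hostname and paths are placeholders.
mount -t nfs4 -o vers=4.2,sec=krb5p filer.example.com:/export /mnt/data

# With NFSv4.2 and SELinux enabled on both client and server,
# per-file security labels travel over the wire (Labeled NFS),
# so MAC policy follows the data.
ls -Z /mnt/data
```

Because the labels are enforced by the kernel’s MAC layer rather than by the application, access decisions stay consistent whether the data is read locally or over NFS.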

Hammerspace v5.2 will further expand the platform’s reach by adding support for running Hammerspace in Oracle Cloud Infrastructure (OCI). New shapes, including bare metal, will be supported, and support for OCI Dedicated Regions will follow, providing a critical option for customers that must maintain strict data sovereignty across distributed environments.

This tight OCI integration extends Hammerspace’s multi-site, multi-cloud and multi-protocol capabilities, including its unique S3-connector technology, so customers can seamlessly bridge on-premises environments to cloud-based GPU-accelerated compute clusters in OCI, AWS and Azure. In this way, NFS-based applications gain native, transparent access to cloud compute resources without workflow changes or moving data into new silos.

This seamless hybrid cloud flexibility is what enables organizations such as Meta to burst extreme-performance AI workloads between on-premises data centers and GPU clusters in OCI, with data movement orchestrated among storage types and locations transparently in the background. At the same time, Hammerspace’s global namespace maintains consistent access for users and applications.

With Hammerspace v5.2, Tier 0 becomes easier to deploy and more effective in practice. The new affinitization capability:

  • Keeps more I/O local to each compute node
  • Reduces the need for manual configuration and directory-based tuning
  • Preserves the standards-based NFS approach, avoiding proprietary client software

For operators of AI and HPC clusters, this means the performance potential of the NVMe devices they already own can be exploited more fully.

Hammerspace v5.2 will be generally available in December.