HPE, NVIDIA expand strategic partnership with new enterprise computing solution for Generative AI

The new joint offering is designed as a turnkey solution for enterprises that don’t want to develop their own AI models.

Manuvir Das, VP Enterprise Computing, NVIDIA

The broad theme at HPE Discover Barcelona 2023, which begins today, is Hewlett Packard Enterprise’s announcement of its AI-native architecture for generative AI, along with enhancements to a variety of solutions that make use of the new technology. The major new offering is the product of a collaboration with long-time partner NVIDIA that will deliver an enterprise computing solution for generative AI early in 2024.

“As enterprises deal with both the opportunity and competitive threat of generative AI, we believe that most will not develop their own models, but will take ones developed elsewhere and utilize those instead,” said Neil McDonald, EVP/GM of HPE Compute. “That is because the speed at which enterprises can transform part of their operations to GenAI is critical. So we have co-developed with NVIDIA this out-of-the-box solution to take pertinent models and deploy them into their environment.”

The co-engineered, pre-configured AI tuning and inferencing solution will let enterprises of any size quickly customize foundation models using private data and deploy production applications anywhere, from edge to cloud.

“In the last year it has become clear that there is an AI use case that applies to everyone,” said Manuvir Das, VP of Enterprise Computing at NVIDIA. “There are lots of teams in a company doing this kind of work now. That means that you need a platform, with the hardware and the software, that HPE has created.”

The HPE hardware that goes into the joint solution was purpose-built and optimized for AI. The rack-scale architecture features the HPE ProLiant Compute DL380a pre-configured with NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs and the NVIDIA Spectrum-X Ethernet networking platform for hyperscale AI. The solution is sized to fine-tune a 70-billion-parameter Llama 2 model and includes 16 HPE ProLiant DL380a servers and 64 L40S GPUs.
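For a sense of what that sizing corresponds to in practice, the sketch below shows the kind of parameter-efficient fine-tuning job such a rack is built to run, using the open-source Hugging Face transformers and peft libraries. It is an illustration only, not HPE’s or NVIDIA’s tooling: the model ID, the private_corpus.jsonl dataset path and all hyperparameters are assumptions made for the example.

# Illustrative only: a LoRA fine-tuning run of the kind this rack is sized for.
# The model ID, dataset path and hyperparameters are assumptions, not HPE/NVIDIA defaults.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-70b-hf"                 # assumed 70B base model (gated on Hugging Face)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token          # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.bfloat16,                    # bf16 weights, sharded across the available GPUs
    device_map="auto",
)

# Parameter-efficient fine-tuning keeps customization affordable relative to full training.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# "Private data": any domain-specific text corpus; this file name is a placeholder.
dataset = load_dataset("json", data_files="private_corpus.jsonl", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=dataset.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-70b-tuned", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, bf16=True, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()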

The HPE AI software involved includes the HPE Machine Learning Development Environment, with new generative AI studio capabilities to rapidly prototype and test models, and HPE Ezmeral Software, with new GPU-aware capabilities that simplify deployment and accelerate data preparation for AI workloads across the hybrid cloud.

NVIDIA’s own AI software includes the NVIDIA NeMo framework, guardrailing toolkits, data curation tools and pretrained models to streamline enterprise GenAI.
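As a rough illustration of what the guardrailing piece looks like in practice, the sketch below uses the open-source NeMo Guardrails library to place a simple topical rail in front of a model. The backend engine, the rail definitions and the example prompt are assumptions made for illustration, not configuration shipped with the announced solution.

# Minimal sketch with the open-source NeMo Guardrails library (pip install nemoguardrails).
# The model engine, rail definitions and prompt below are illustrative assumptions only.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai               # assumed backend; requires an OPENAI_API_KEY in the environment
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask off topic
  "give me investment advice"

define bot refuse off topic
  "I can only help with questions about our products and documentation."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

config = RailsConfig.from_content(yaml_content=yaml_content, colang_content=colang_content)
rails = LLMRails(config)

# The rail intercepts out-of-scope requests before they reach the underlying model.
reply = rails.generate(messages=[{"role": "user", "content": "give me investment advice"}])
print(reply["content"])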

“This joint solution is designed to hit the sweet spot of enterprise use cases,” McDonald said. “It is not just hardware, but also the machine learning development environment, Ezmeral, and NVIDIA. This combination between HPE and NVIDIA is the fastest path to deploying generative AI because it simplifies things with a prepackaged and integrated environment.”

HPE also announced a turnkey supercomputing solution powered by NVIDIA for large enterprises, research institutions and government organizations to address the first phase of the AI lifecycle: developing and training foundational models. The enterprise computing solution for generative AI, by contrast, is a smaller form-factor offering for enterprise customers focused on tuning and inferencing.

The enterprise computing solution for generative AI will be orderable in Q1 of calendar year 2024.