HPE helps enterprises drive agentic and physical AI innovation with systems accelerated by NVIDIA Blackwell and the latest NVIDIA AI models

The new announcements deepen integration with NVIDIA AI Enterprise and bring the newest NVIDIA AI models and NVIDIA Blueprints to HPE Private Cloud AI.

HPE has announced significant advancements to its NVIDIA AI Computing by HPE portfolio, which supports enterprise customers of all sizes throughout the entire AI lifecycle. These developments deepen integration with NVIDIA AI Enterprise and bring the newest NVIDIA AI models and NVIDIA Blueprints to HPE Private Cloud AI, enabling developers to deploy AI applications with ease. HPE will also ship HPE ProLiant Compute servers that feature NVIDIA Blackwell accelerated computing to advance generative, agentic and physical AI workloads.

HPE ProLiant Compute servers accelerated by the NVIDIA Blackwell architecture will be among the first to market. This includes two NVIDIA RTX PRO Server configurations, one of which is the HPE ProLiant DL385 Gen11 server. Supporting up to two NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs in the new 2U RTX PRO Server form factor, this air-cooled server is well-suited for data centers tasked with meeting the growing AI demands of the enterprise.

HPE ProLiant Compute servers provide organizations with the flexibility and power to innovate across the enterprise, helping unlock new levels of productivity, security, and operational efficiency. HPE ProLiant Compute Gen12 servers feature multi-layered security with HPE Integrated Lights Out (iLO) 7, Silicon Root of Trust, and a secure enclave that enables tamper-resistant protection and quantum-resistant firmware signing.

The HPE ProLiant Compute DL380a Gen12 server supports up to 8 NVIDIA RTX PRO 6000 GPUs in a 4U form factor. This previously announced configuration will ship in September.

The servers are purpose-built to handle diverse workloads and meet growing enterprise IT demand for GPU-accelerated compute power. Centralized, cloud-native lifecycle automation delivered through HPE Compute Ops Management reduces IT hours spent on server management by up to 75% and cuts downtime by 4.8 hours per server annually. Target workloads include generative and agentic AI; physical AI, including robotics and industrial use cases; visual computing, such as quality control monitoring and autonomous vehicles; simulation; 3D modeling; digital twins; and enterprise applications. HPE recently announced the next generation of HPE Private Cloud AI, which will be available later this year. This includes support for NVIDIA RTX PRO 6000 GPUs with HPE ProLiant Compute Gen12 servers, seamless scalability across GPU generations, air-gapped management, and enterprise multi-tenancy.

HPE Private Cloud AI adds support for new NVIDIA reasoning models and video blueprint

HPE Private Cloud AI, a turnkey AI factory solution for the enterprise co-developed with NVIDIA, will support the latest versions of the NVIDIA Nemotron models for agentic AI, the Cosmos Reason vision language model (VLM) for physical AI and robotics, and the NVIDIA Blueprint for Video Search and Summarization (VSS 2.4) to build video analytics AI agents that can extract valuable insights from massive volumes of video data. Through continuous co-development between HPE and NVIDIA, HPE Private Cloud AI is uniquely designed to deliver the fastest deployment of NVIDIA NIM microservices for the latest AI models and NVIDIA Blueprints – accessible by customers through HPE AI Essentials.