AI/ML
HPE adds Blackwell, Rubin systems to Nvidia-backed AI push
HPE has expanded its Nvidia-based AI portfolio with new systems built on Blackwell and upcoming Rubin GPUs, alongside updates to its Alletra Storage MP X10000, which it claims is the first object storage platform to achieve Nvidia-Certified Storage validation.
The company is also announcing new Nvidia-powered AI Factory and Supercomputing offerings, which include AI grids and enable sovereign AI in Europe and the US.
HPE president and CEO Antonio Neri said: "The AI race is fundamentally about speed, scale, and trust. Our industry leadership across cloud, networking, and AI enables organizations to operationalize AI securely, efficiently, and at an unprecedented scale. Together with Nvidia, HPE delivers turnkey AI factories and networks that transform AI ambitions into real enterprise value."
HPE says it's the first vendor to achieve Nvidia-Certified Storage validation for object-based systems at the Foundation level with the Alletra Storage MP X10000. Nvidia has validated and benchmarked the array’s performance for workloads of up to 128 GPUs, conducted functional tests for enterprise-grade availability and reliability, and confirmed that the storage layer efficiently feeds data to accelerated computing resources, delivering faster model training, lower-latency inference, and better overall utilization.
An HPE blog discusses how the Alletra Storage MP X10000 and Nvidia RDMA for S3 accelerate AI pipelines, RAG, and real-time inference with low-latency, high-throughput, GPU-direct data paths, and includes a diagram illustrating the X10000's support for AI pipeline stages.
The X10000 is central to HPE's AI storage activities. HPE has indexed terabyte-scale vector data in under an hour using the X10000 with Nvidia cuVS CAGRA GPU-accelerated vector indexing and cuObject for accelerated storage I/O. Another blog explains how it did this, showing a 17x improvement in index build time and an 8x improvement in end-to-end pipeline transport using a single Nvidia H100 and accelerated remote direct memory access (RDMA).
HPE says it's evolving the X10000 to centralize intelligent data handling and optimize how AI workloads ingest, process, and deliver data. The company will be supporting the new Nvidia STX rack-scale reference architecture to develop new AI storage offerings powered by Vera Rubin accelerators, BlueField-4 DPUs, Spectrum-X networking, ConnectX NICs, and Nvidia's AI software.
HPE GTC news
At the event, HPE also announced a range of additional enterprise and edge-focused products:
- HPE is expanding HPE Private Cloud AI, its turnkey enterprise AI factory co-engineered with Nvidia, to deliver greater performance, scalability, and flexibility for enterprise inference.
- New network expansion racks so HPE Private Cloud AI deployments can scale up to 128 GPUs.
- The large HPE Private Cloud AI system is now available in an air-gapped configuration, ensuring sensitive data is not exposed to external networks.
- HPE ProLiant Compute DL380a Gen12 servers and HPE Private Cloud AI systems based on the DL380a are being certified for Fortanix Confidential AI, a joint solution leveraging Nvidia Confidential Computing for secure on-premises deployments of AI models and processing of sensitive data without exposure.
- CrowdStrike delivers agentic security for HPE Private Cloud AI.
- HPE Private Cloud AI delivers a pre-configured hardware and software stack featuring the latest Nvidia AI Enterprise software and blueprints, including the updated AI‑Q blueprint for AI agents and new Omniverse blueprint for digital twins. The latest AI-Q blueprint enables developers to build customizable AI agents that they own, inspect, and control.
- HPE is updating HPE Private Cloud AI, the latest HPE ProLiant servers and HPE AI factories to support the latest Nemotron open models – part of the Nvidia Agent Toolkit – to simplify deployment of secure, on‑prem and sovereign infrastructure and quickly deliver scalable, production‑ready outcomes.
- RTX PRO 6000 Blackwell Server Edition GPUs are available across all configurations of HPE's Private Cloud AI and AI factory solutions.
- HPE is adding the new RTX PRO 4500 Blackwell Server Edition GPU to ProLiant servers for edge deployments, small-language models, vector databases, and data analytics workloads.
- HPE is developing new products built on RTX PRO 4500 Blackwell Server Edition GPUs, including integration of the Retail Shopping Assistant Blueprint to streamline deployment across the retail sector.
- HPE is also expanding the portfolio of ProLiant Compute servers that feature the RTX PRO 6000 Blackwell Server Edition GPU.
- New Nvidia co-designed multi-workload offerings simplify deployment of AI use cases for autonomous edge intelligence, retail shopping assistance, video search and summarization, and biomedical research.
The multi-workload offerings combine ProLiant Compute servers with Nvidia accelerated computing, Spectrum-X Ethernet networking, BlueField DPUs, and ConnectX NICs. They also incorporate Nvidia software, including CUDA-X libraries, blueprints, confidential computing, Multi-Instance GPU (MIG), and virtual GPU (vGPU) technologies, alongside HPE chip-to-cloud security and AI-driven automation through HPE Compute Ops Management.
HPE AI Grid
HPE is also introducing an AI Grid, an end-to-end offering built on an Nvidia reference architecture to connect AI factories and distributed inference clusters across regional and far‑edge sites. It enables service providers to deploy and operate thousands of distributed inference sites, turning AI installations into a single intelligent system.
The HPE AI Grid includes:
- Juniper's telco-grade multicloud routing and coherent optics for predictable long-haul and metro connectivity; cloud-native and multi-tenant security; firewalls; WAN automation; and orchestration to deliver zero-touch deployment and lifecycle operations.
- ProLiant Compute edge and rack servers with Nvidia-accelerated computing, including RTX PRO 6000 Blackwell GPUs, as well as BlueField DPUs, Spectrum-X Ethernet switches, ConnectX SuperNICs, and AI blueprints for rapid AI inference.
HPE says AI Grid lets service providers convert existing sites with power and connectivity into RAN‑ready AI grids, enabling distributed inference and new services at scale.
Availability
HPE support for RTX PRO 4500 Blackwell Server Edition GPUs across the ProLiant Compute server portfolio will roll out in Q1 and Q2 2026.
HPE Private Cloud AI with air-gapped deployment, support for RTX PRO 6000 Blackwell Server Edition GPUs across all configurations, and the AI-Q and Omniverse blueprints is available now.
The new network expansion racks for HPE Private Cloud AI for scaling up to 128 GPUs will be available in July.
The HPE and Protopia secure blueprint for trustworthy AI factories is planned for Q2 2026.
Fortanix support with ProLiant DL380a Gen12 systems is planned for Q3 2026.
Bootnote
Nvidia's cuVS is a library for GPU-accelerated vector search and clustering. CAGRA is a graph-based nearest neighbor algorithm built from the ground up for GPU acceleration, delivering state-of-the-art index build and query performance for both small- and large-batch search.
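For readers unfamiliar with the underlying problem, the sketch below is a minimal brute-force k-nearest-neighbor search in NumPy. This is not cuVS code (CAGRA requires a GPU and the cuVS library); it is only the naive baseline for the task CAGRA accelerates. CAGRA avoids this exhaustive compare-every-query-against-every-vector scan by building a searchable proximity graph over the dataset, which is what makes GPU index build and large-batch search fast.

```python
import numpy as np

def knn_search(dataset: np.ndarray, queries: np.ndarray, k: int):
    """Exact k-NN by brute force: compare every query to every vector.

    CAGRA replaces this O(queries x dataset) scan with traversal of a
    GPU-built nearest-neighbor graph.
    """
    # Squared Euclidean distance from each query to each dataset vector.
    dists = ((queries[:, None, :] - dataset[None, :, :]) ** 2).sum(axis=-1)
    # Indices of the k closest vectors per query, nearest first.
    neighbors = np.argsort(dists, axis=1)[:, :k]
    return neighbors, np.take_along_axis(dists, neighbors, axis=1)

rng = np.random.default_rng(0)
dataset = rng.standard_normal((1000, 64)).astype(np.float32)
queries = dataset[:5] + 0.001  # queries sitting next to known vectors
neighbors, dists = knn_search(dataset, queries, k=3)
print(neighbors[:, 0])  # nearest neighbor of each query
```

The quadratic cost of this baseline is why terabyte-scale vector indexing, as in HPE's X10000 demonstration, relies on GPU-accelerated graph methods rather than exhaustive search.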