HPE building Alletra MP-based data fabric

HPE is building a distributed edge-to-core data center Data Fabric using its disaggregated Alletra Storage MP X10000 storage system.

This is part of an overall Nvidia-based AI Factory and Smart Cities initiative, with a second generation of its Private Cloud AI offering. There is an expanded Nvidia GPU-based computing portfolio featuring new ProLiant XD685 and NVL72 server systems. An HPE Agentic Smart City Solution has been deployed by the town of Vail, Colorado, to help cope with seasonal tourist population increases, and there is a public sector partnership between HPE and Nvidia. HPE says its AI Factory approach provides centralized workflows, reusable assets, a unified environment with speedy and predictable deployments, secure governance, and control. The Data Fabric was formerly known as HPE’s Ezmeral Data Fabric.

It is now announcing a unifying distributed Data Fabric bridging core and edge data centers, colocation sites, and the public clouds – AWS, Azure, and GCP – with a global namespace, universal access, multi-protocol support, automated tiering, caching, and mirroring.
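As an illustration of the federated-namespace idea (not HPE's implementation – all names here are hypothetical), a global namespace maps one logical path space onto multiple backends across core, edge, and cloud sites, so clients resolve a single name regardless of where the data actually lives:

```python
# Hypothetical sketch of a global namespace: one logical path space
# federated across several storage backends. All site and path names
# are illustrative; this is not HPE Data Fabric code.

class FederatedNamespace:
    def __init__(self):
        # Map namespace prefixes to backend descriptors (site, protocol).
        self.mounts = {}

    def mount(self, prefix, site, protocol):
        self.mounts[prefix] = {"site": site, "protocol": protocol}

    def resolve(self, path):
        # Longest-prefix match: pick the most specific mount for the path.
        matches = [p for p in self.mounts if path.startswith(p)]
        if not matches:
            raise KeyError(f"no backend mounted for {path}")
        best = max(matches, key=len)
        return {"backend": self.mounts[best], "relative": path[len(best):]}

ns = FederatedNamespace()
ns.mount("/global/core/", "on-prem-dc", "nfs")
ns.mount("/global/edge/", "edge-site-1", "s3")
ns.mount("/global/cloud/", "aws-us-east-1", "s3")

loc = ns.resolve("/global/edge/sensors/day1.parquet")
print(loc["backend"]["site"], loc["relative"])
# → edge-site-1 sensors/day1.parquet
```

The multi-protocol angle is that each mount can front a different access protocol (NFS, S3, and so on) while clients keep using the one logical path.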

HPE’s Gokul Sathiacama, VP of Data Storage for AI, told a briefing: “Data is not consistent, is in silos, and, with the unified data layer and with data fabric software, we are bringing together multiple sources of data under a single federated namespace.”

“The data layer consists of two different products that complement each other. One is the Alletra Storage MP X10000 that provides data storage capabilities, in conjunction with the Data Fabric software, which provides the data management capabilities.”

We’re told this software supports S3 over RDMA, providing fast Alletra Storage MP X10000 object data transfer to GPU server systems and GPU (HBM) memory. Sathiacama said it is “reducing the latency by up to 80 percent, and then reducing the CPU utilisation by up to 99 percent. This means more utilisation of your compute infrastructure and more processing power for the storage solution.”

HPE actually announced S3 over RDMA in August as part of its X10000 v2 release. This was in partnership with Nvidia, which is making its own SDKs generally available. HPE is now able to go to customers and provide an end-to-end system.

The Data Fabric supports the Model Context Protocol (MCP) with agentic AI governance so that AI agents are kept in check.

Sathiacama said: “Once you have those data sources under one roof, now you are able to provide more heuristics into who is accessing the data, what applications are using the data, and then ensuring that the right people have the right access to the data, because data is very sensitive, especially when it comes to AI.”
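The governance idea described above can be sketched as a policy check: before an AI agent touches a dataset, its entitlements are verified and the decision is logged, which is what yields the "who is accessing the data" heuristics. A minimal illustrative sketch, with hypothetical policy names, not HPE's MCP implementation:

```python
# Illustrative sketch of agentic access governance: an AI agent's
# request is checked against a policy table, and every decision is
# audited. Agent names and datasets are hypothetical.

POLICY = {
    "finance-agent": {"datasets": {"ledger", "invoices"}, "actions": {"read"}},
    "support-agent": {"datasets": {"tickets"}, "actions": {"read", "write"}},
}

def authorize(agent, dataset, action, audit_log):
    entry = POLICY.get(agent, {"datasets": set(), "actions": set()})
    allowed = dataset in entry["datasets"] and action in entry["actions"]
    # Record every decision, permitted or not, for later analysis of
    # who is accessing which data with which applications.
    audit_log.append({"agent": agent, "dataset": dataset,
                      "action": action, "allowed": allowed})
    return allowed

log = []
print(authorize("finance-agent", "ledger", "read", log))    # → True
print(authorize("finance-agent", "ledger", "write", log))   # → False
print(authorize("unknown-agent", "tickets", "read", log))   # → False
```

Unknown agents default to an empty entitlement set, so access is denied by default rather than granted, which matches the "right people, right access" framing.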

He added: “We are announcing that our cloud model through the GreenLake cloud platform for the X10000 is now coming on-premises. So customers who have the need for air-gapped deployments are able to do so in their own data centres, as opposed to accessing and managing their systems through our cloud platform.”

Private Cloud AI Gen 2 is also now available in a smaller form factor with Nvidia’s RTX 6000 Pro. This has, HPE says, three times better price-performance than the previous generation. The Gen 2 software also includes a digital avatar for those who want to try a different kind of customer interaction.

The Vail Agentic Smart City deal uses HPE’s Private Cloud AI and the RTX 6000 Pro and, HPE says, improves public services with Section 508 accessibility, safety, housing, and a digital concierge, while complying with state and city privacy mandates. It is, the company claims, repeatable across multiple municipalities.

HPE has introduced new Nvidia-based server hardware:

  • ProLiant XD685 with Nvidia B300 DLC for AI training workloads
  • Nvidia rack-scale GB300 NVL72 by HPE, with 72 Blackwell Ultra GPUs tied together by NVLink
  • ProLiant DL380 Gen 12 with RTX PRO 6000 and Azure Local premier, built to boost AI graphics performance at small-to-medium scale

The Nvidia-HPE public sector partnership is installing sovereign AI factories in Utah, building supercomputers such as ORNL’s Discovery and Lux, and getting involved in quantum computing research. There are, HPE says, multiple upcoming public sector collaborations with Nvidia.

HPE’s Robin Braun, VP, AI Business Development, Hybrid Cloud, said in a briefing: “IDC recently assessed private AI infrastructure systems vendors and ranked HPE highest in capability and furthest right in strategies. This was ahead of Dell and Supermicro as well as IBM, Cisco and Oracle.”

Comment

HPE’s Data Fabric invites comparison with similar offerings from NetApp, Pure Storage, and Qumulo, as well as Hammerspace, with NetApp being the pioneer. These are all more mature, with multiple public cloud support. They were initially developed before the Gen AI period, in hybrid cloud times, when data was made available in a global namespace covering the on-premises and public cloud environments.

Gen AI has made the data fabric concept more important, as AI processing can be, and often is, carried out in the cloud as well as on-premises. That makes having universal data availability across your IT infrastructure as a matter of course, rather than setting up your own data-moving processes, a no-brainer. We expect HPE to develop its data fabric aggressively, adding various AI data services, as it competes with the other data fabric suppliers.