AI/ML

MinIO rolls out exabyte-class ExaPOD to keep AI GPUs busy


MinIO is combining its AIStor object software with Supermicro Intel Xeon 6 servers and Solidigm SSDs to craft 1 EiB (1.15 EB) rack-scale on-prem building blocks that scale AI data storage toward the zettabyte level.

This extends MinIO’s DataPOD object storage reference architecture (RA), introduced in August 2024, which was a scalable 100 PiB (112.6 PB) modular building block aimed at feeding data fast to Nvidia GPU servers. AIStor was announced a year ago, extending MinIO’s Enterprise Object Store software with the S3 API, PromptObject, support for S3 over RDMA, AIHub private Hugging Face repository, and an updated global console with a Kubernetes operator, to provide fast and vast object storage for AI training and inferencing. The ExaPOD RA takes this further with exabyte-level capacity support.

AB Periasamy

Co-founder and co-CEO AB Periasamy said: “AI is not simply about adopting the latest model or GPU, but about re-architecting how data is stored, moved, and made accessible at scale.

“The winners in the AI era will be defined by their ability to efficiently deliver data at exascale performance with hyperscaler economics. ExaPOD makes that possible, providing a simple, modular architecture for enterprises to build their own AI infrastructure on their terms, with complete control and no compromises.”

MinIO says AI training and inferencing GPUs can sit idle because data doesn’t reach them fast enough. ExaPOD will keep them busy by “reducing and stabilizing latency at exascale, ensuring a consistent, high-throughput data path that keeps AI workloads continuously fed and operational.”

Compared to US-owned public cloud object stores, it provides predictable total cost of ownership (TCO), no egress charges or cloud lock-in, plus sovereign deployment capability. The suggested TCO is $4.55 to $4.60 per usable TiB per month, but your mileage will vary.

An ExaPOD 48U rack has 36 PiB (40.5 PB) of all-flash usable capacity and draws up to 900 W per usable PiB, meaning 32,400 W per rack. It is fitted with 400 GbE network links and uses Supermicro SYS-212-TN 2RU servers with 24 x NVMe drive slots and the Xeon 6781P (80 cores, 136 PCIe Gen 5 lanes) CPU. The SSDs are 122.88 TB PCIe Gen 5 QLC NVMe drives, with full parallelism across a server’s 24 SSDs, and the rack can have optional liquid cooling.
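The per-rack figures can be sanity-checked with some back-of-envelope arithmetic. A minimal sketch, assuming roughly 20 data servers per 48U rack (a figure implied by dividing the full configuration's 640 servers across 32 racks, not stated directly by MinIO):

```python
# Back-of-envelope check of the quoted per-rack capacity and power figures.
SERVERS_PER_RACK = 20          # assumption: 640 servers / 32 racks
SSDS_PER_SERVER = 24
SSD_TB = 122.88                # decimal terabytes per drive
WATTS_PER_USABLE_PIB = 900
USABLE_PIB_PER_RACK = 36

raw_tb_per_rack = SERVERS_PER_RACK * SSDS_PER_SERVER * SSD_TB
raw_pib_per_rack = raw_tb_per_rack * 1e12 / 2**50  # convert decimal TB to PiB
rack_watts = WATTS_PER_USABLE_PIB * USABLE_PIB_PER_RACK

print(f"raw capacity per rack:   {raw_pib_per_rack:,.1f} PiB")
print(f"usable capacity per rack: {USABLE_PIB_PER_RACK} PiB "
      f"(~{USABLE_PIB_PER_RACK / raw_pib_per_rack:.0%} of raw)")
print(f"power per rack:          {rack_watts:,} W")
```

The roughly 69 percent usable-to-raw ratio this implies is consistent with erasure-coding overhead, though the reference architecture does not break the figure down.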

There is 19.2 TBps aggregate throughput at 1 EiB when configured with 32 racks, 640 servers in total, and 122.88 TB SSDs; lower-capacity SSDs are also supported. It delivers linear performance scaling and consistent time to first byte (TTFB), according to MinIO. We’re told the ExaPOD natively supports generative AI, vector databases, and edge computing, and supports AI-driven observability.
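Dividing the aggregate figure back down gives a sense of what linear scaling implies per rack and per server; a quick sketch (the per-rack server count is inferred from the totals above):

```python
# What 19.2 TBps aggregate at full scale implies per rack and per server.
AGG_TBPS = 19.2   # aggregate throughput at 1 EiB / 32 racks
RACKS = 32
SERVERS = 640     # total across all racks

per_rack_gbps = AGG_TBPS * 1000 / RACKS      # GBps per rack
per_server_gbps = AGG_TBPS * 1000 / SERVERS  # GBps per server

print(f"per rack:   {per_rack_gbps:.0f} GBps")
print(f"per server: {per_server_gbps:.0f} GBps")
```

At 30 GBps per server, each node sits comfortably within a single 400 GbE link, which tops out at 50 GBps.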

In MinIO’s view, ExaPOD brings hyperscale unit economics to the on-premises storage world.

MinIO will showcase ExaPOD at SuperComputing 2025 in St. Louis, November 16-21, booth #6513. Read some ExaPOD background in a blog and a white paper.

Bootnote

There are 1,048,576 TiB (tebibytes) in 1 EiB (exbibyte), which means the TCO for a 1 EiB ExaPOD will be $4,771,020.80/month to $4,823,449.60/month.
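For readers who want to reproduce the bootnote's arithmetic:

```python
# Monthly TCO for 1 EiB at the quoted $4.55-$4.60 per usable TiB per month.
TIB_PER_EIB = 2**60 // 2**40   # 1,048,576 TiB in an EiB
low_rate, high_rate = 4.55, 4.60

monthly_low = TIB_PER_EIB * low_rate
monthly_high = TIB_PER_EIB * high_rate

print(f"${monthly_low:,.2f}/month to ${monthly_high:,.2f}/month")
```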