Xinnor's alternative software RAID filer for AI
Software RAID supplier Xinnor has designed an all-flash NAS filer for AI around its own RAID stack, the XFS file system and an RDMA data pump.
Its xiNAS product is designed for AI, HPC, and other data-intensive workloads, and delivers all-flash, scale-out performance using standard NFS semantics, without proprietary clients or specialized hardware. On this basis it competes with every other all-flash filer transporting data to and from GPU servers, such as those from DDN, Dell, HPE, Hitachi Vantara, NetApp, Pure Storage, WEKA, and VAST Data. The xiNAS system combines Xinnor's xiRAID software, an optimized XFS filesystem, and NFS over RDMA.
Xinnor CEO Dmitry Livshits said: “xiNAS is designed to make NFS a performance enabler rather than a constraint for AI and HPC. By tightly integrating xiRAID with an optimized filesystem and RDMA-enabled NFS, we deliver shared storage that keeps GPUs fed, scales linearly, and continues to perform even under real-world fault conditions.”
Xinnor says xiNAS is optimized for multi-client, multi-server deployments and scales linearly as nodes are added. xiRAID provides data protection, while NFS over RDMA delivers high performance over either RoCE or InfiniBand.
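Because xiNAS exposes standard NFS semantics, clients need no proprietary software. As a rough sketch of what "NFS over RDMA without special clients" means in practice, a stock Linux client can mount an RDMA-backed NFS export like this (the server name and export path below are placeholders, not Xinnor-specific values):

```shell
# Client-side sketch: mounting an NFS export over RDMA from a standard
# Linux host with an RDMA-capable NIC (RoCE or InfiniBand).
# Hypothetical names: "nas-node1" and "/export/data" are placeholders.

modprobe rpcrdma    # load the kernel's NFS/RDMA transport module

# proto=rdma switches the NFS transport from TCP to RDMA;
# 20049 is the IANA-registered port for NFS over RDMA.
mount -t nfs -o proto=rdma,port=20049 nas-node1:/export/data /mnt/data
```

The `proto=rdma` and `port=20049` mount options are standard Linux NFS client features, which is what lets a filer of this kind serve GPU servers without a proprietary client agent.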
Tests with Supermicro used a single AS-1116CS-TN server, fitted with AMD EPYC 9004-series processors, Nvidia BlueField-3 DPUs, and 12 x PCIe Gen5 NVMe SSDs. This demonstrated up to 74.5 GBps read and 39.5 GBps write throughput. Xinnor says “backend testing showed 97–100 percent efficiency of theoretical NVMe performance with minimal CPU overhead, preserving headroom for high-speed networking.”
The random read and write IOPS numbers were:
- Read: 990,000 IOPS with ~265 µs latency
- Write: 587,000 IOPS with ~430 µs latency
Xinnor asserts that: “This indicates the architecture is not only strong at streaming bandwidth, but also capable of servicing high-operation-rate workloads often seen in AI pipelines (many small files), build farms, and mixed analytics environments.”
A 2-node deployment delivered 117 GBps sequential read and 79.6 GBps sequential write bandwidth. Xinnor says: “Write throughput demonstrated near-linear scaling as nodes were added, confirming readiness for larger clustered deployments where aggregate bandwidth grows with server count.”
Read performance dropped about 8.5 percent to 107 GBps when one SSD failed, with writes running at 81 GBps. When a drive rebuild process was underway, reads declined to 102 GBps with writes slowing to 75 GBps. The company says this “shows that read-intensive workloads (common in AI training and fine-tuning) experience minimal disruption during a drive failure and rebuild, while write performance is barely affected.”
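The scaling and fault-tolerance claims above can be sanity-checked directly from the quoted figures. This short sketch recomputes the two-node scaling efficiency and the degraded-mode read drop using only the numbers reported in this article:

```python
# Sanity-check of the throughput figures quoted above (all in GBps).
single = {"read": 74.5, "write": 39.5}   # single-node Supermicro test
dual = {"read": 117.0, "write": 79.6}    # two-node deployment
degraded_read = 107.0                    # reads with one failed SSD

# Scaling efficiency: measured 2-node throughput vs. 2x the 1-node figure.
scaling = {k: dual[k] / (2 * single[k]) for k in single}

# Read degradation when one SSD fails.
read_drop = 1 - degraded_read / dual["read"]

print(f"read scaling efficiency:  {scaling['read']:.1%}")   # 78.5%
print(f"write scaling efficiency: {scaling['write']:.1%}")  # 100.8%
print(f"read drop, one failed SSD: {read_drop:.1%}")        # 8.5%
```

Writes scale essentially linearly (slightly over 100 percent of 2x the single-node figure), which matches Xinnor's "near-linear scaling" claim, while reads land at roughly 79 percent of perfect doubling; the 8.5 percent read drop under a single SSD failure matches the figure stated above.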
Read more about the validation of this Xinnor Supermicro system here, and about XFS here.
The xiNAS product is available through Supermicro and its partners and delivered as a complete software and hardware combination.
Bootnote
XFS is a 64-bit journaling file system originally developed by Silicon Graphics in 1993–1994 for its IRIX operating system. SGI open-sourced XFS for the Linux kernel in 1999. It has been the default filesystem for Red Hat Enterprise Linux since RHEL 7.