OFP’s data server killers aiming for AI system scalability and efficiency nirvana

The Open Flash Platform (OFP) group wants to kill data servers and enable massive-scale AI hardware/software systems with ten times the density of existing file-based AI storage, using a tenth of the power.

The OFP was founded eight months ago, in July 2025, with the intent of eliminating the storage data server. It wants to replace all-flash storage arrays with disaggregated, DPU-controlled shelves of directly accessed, new-style SSDs, using pNFS to provide low-latency, high-bandwidth, and scalable parallel data access.

It envisages a 1 EB rack with 10 times or more the density of, and 90 percent less power draw than, 1 EB of storage array capacity. It would have a 60 percent longer operating life and a correspondingly lower total cost of ownership than today's storage arrays at the same capacity level. The storage server is collapsed into a smartNIC (DPU) running Linux and pNFS, with data-accessing clients getting direct access to the SSDs.
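To make the direct-access idea concrete, here is a minimal, illustrative Python sketch of the pNFS pattern: a metadata server hands out layouts, and the client then reads file data straight from the flash devices, with no storage server in the data path. All class and function names here are hypothetical, not OFP or Hammerspace software.

```python
# Illustrative sketch only: models pNFS's split between a metadata
# server (which hands out layouts) and direct client-to-device data
# access. Names are hypothetical, not OFP code.
from dataclasses import dataclass

@dataclass
class Layout:
    """A pNFS-style layout: which device holds which byte range of a file."""
    device_id: str
    offset: int
    length: int

class MetadataServer:
    """Answers LAYOUTGET-style requests; never touches file data."""
    def __init__(self, layout_map):
        self.layout_map = layout_map  # path -> list of Layout

    def layoutget(self, path):
        return self.layout_map[path]

class FlashDevice:
    """Stands in for a directly accessed SSD serving data over the network."""
    def __init__(self, device_id, blocks):
        self.device_id = device_id
        self.blocks = blocks  # offset -> bytes

    def read(self, offset, length):
        return self.blocks[offset][:length]

# The client fetches the layout once, then reads from the devices
# directly, bypassing any intermediate storage server.
devices = {"ssd-0": FlashDevice("ssd-0", {0: b"hello "}),
           "ssd-1": FlashDevice("ssd-1", {0: b"world"})}
mds = MetadataServer({"/data/file": [Layout("ssd-0", 0, 6),
                                     Layout("ssd-1", 0, 5)]})

data = b"".join(devices[seg.device_id].read(seg.offset, seg.length)
                for seg in mds.layoutget("/data/file"))
print(data.decode())  # "hello world"
```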

Note that OFP is a file-centric concept, not an object-centric one. It could, we imagine, provide a pNFS-based storage layer underneath an object storage abstraction (see bootnote).

OFP is now an eight-member group comprising Hammerspace (pNFS-based software), the Linux community, Los Alamos National Laboratory (LANL), Phison (controllers), ScaleFlux (computational storage), SK Hynix (NAND chips and SSDs), Solidigm (SSDs), and Xsight Labs (DPUs). It is working with Meta on having its design concept included in Meta's Open Compute Project (OCP).

The OFP roadmap started with a feasibility study in the first half of 2025 and is now in a prototyping phase, with a validation and early-access stage planned for this quarter. General availability is set for the second half of this year.

Prototype diagrams show a 4 PB OFP sled in an elongated, E2-style NVMe format that is 31 inches (787.4mm) long, 1.75 inches (44.45mm) high, and 3.5 inches (88.9mm) wide. For comparison, the EDSFF E2 standard format is 7.9 inches (200mm) long by 0.4 inches (9.5mm) high by 3 inches (76.2mm) wide.
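For a sense of scale, a quick back-of-envelope comparison using the dimensions quoted above suggests one sled occupies roughly 21 times the volume of a standard E2 drive:

```python
# Rough volume comparison of the prototype OFP sled against the
# EDSFF E2 format, using the dimensions quoted above (millimetres).
sled_mm = (787.4, 44.45, 88.9)   # 31in x 1.75in x 3.5in
e2_mm   = (200.0, 9.5, 76.2)     # 7.9in x 0.4in x 3in

def volume_litres(dims_mm):
    length, height, width = dims_mm
    return length * height * width / 1e6  # mm^3 -> litres

print(f"Sled:  {volume_litres(sled_mm):.2f} L")   # ~3.11 L
print(f"E2:    {volume_litres(e2_mm):.2f} L")     # ~0.14 L
print(f"Ratio: {volume_litres(sled_mm) / volume_litres(e2_mm):.0f}x")  # ~21x
```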

OFP sled diagram.

An OFP 1 RU tray would contain up to six of these sleds (24 PB) side by side, and a rack would hold 42 trays, or 252 sleds, providing the 1 EB capacity. It would have an overall 200 Tbps of bandwidth and 25 kW idle power, and so support 40 PB per kW.
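The quoted numbers hold up to a quick arithmetic check:

```python
# Back-of-envelope check of the rack-level figures quoted above.
sled_pb = 4            # PB per sled
sleds_per_tray = 6
trays_per_rack = 42
idle_kw = 25

tray_pb = sled_pb * sleds_per_tray             # 24 PB per 1 RU tray
rack_pb = tray_pb * trays_per_rack             # 1,008 PB ~= 1 EB
sled_count = sleds_per_tray * trays_per_rack   # 252 sleds

print(f"{rack_pb} PB across {sled_count} sleds; "
      f"{rack_pb / idle_kw:.1f} PB/kW")
# -> 1008 PB across 252 sleds; 40.3 PB/kW
```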

OFP tray diagram.

A Hammerspace OFP slide shows a 50.2 PB scale-out filer system in a single rack, with 19 metadata servers, 19 storage servers, and storage network switches, taking up 40 RU and drawing 29.1 kW of power. This is contrasted with a 48 PB OFP setup needing just 2 RU and 1.2 kW of electricity, freeing up rack space for GPUs. The comparison is extreme.
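Taken at face value, those slide figures imply roughly a 19x gain in capacity per rack unit and a 23x gain in capacity per kilowatt:

```python
# Ratio check on Hammerspace's comparison as quoted above.
filer = {"pb": 50.2, "ru": 40, "kw": 29.1}   # scale-out filer rack
ofp   = {"pb": 48.0, "ru": 2,  "kw": 1.2}    # OFP setup

density_gain = (ofp["pb"] / ofp["ru"]) / (filer["pb"] / filer["ru"])
power_gain   = (ofp["pb"] / ofp["kw"]) / (filer["pb"] / filer["kw"])
print(f"Capacity per RU: {density_gain:.0f}x better")   # ~19x
print(f"Capacity per kW: {power_gain:.0f}x better")     # ~23x
```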

OFP tray and sleds in prototype form.

OFP envisages an NFS-eSSD concept in which a drive, with a system-on-chip (SoC) containing an Arm CPU, a 100 GbE NIC, and a flash subsystem, becomes a Linux storage server, running NFS and pNFS in its kernel; i.e. device-resident file services. Such a drive is a network endpoint, not a PCIe peripheral. Its controller is now a full-blown storage server and needs to be aware of files and layouts, access patterns, and network scheduling. It is responsible for data placement, garbage collection, wear levelling, and IO scheduling.
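As a thought experiment, a toy Python model of such a drive might look like the following. It is purely conceptual; the names are hypothetical, and a real NFS-eSSD would run these roles inside kernel pNFS code on the drive's SoC rather than in application-level classes.

```python
# Conceptual sketch only: an NFS-eSSD as a network endpoint that owns
# file-level responsibilities traditionally split between a filer head
# and an SSD controller. All names are hypothetical, not an OFP API.
class NFSeSSD:
    def __init__(self, address):
        self.address = address   # a network endpoint, not a PCIe peripheral
        self.files = {}          # path -> bytes, stand-in for NAND media

    def serve_layout(self, path):
        """pNFS data-server role: describe where the file's data lives."""
        return {"device": self.address,
                "length": len(self.files.get(path, b""))}

    def write(self, path, data):
        """File-aware placement; a real controller would also handle
        garbage collection, wear levelling and IO scheduling here."""
        self.files[path] = data

    def read(self, path):
        return self.files[path]

drive = NFSeSSD("10.0.0.7:2049")
drive.write("/data/model.ckpt", b"weights...")
print(drive.serve_layout("/data/model.ckpt"))
```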

Fleets of these storage server drives need to be managed and secured, with that security rooted inside the drive's silicon.

We note that SSD controller supplier Phison is an OFP member, as are SSD suppliers SK Hynix and its subsidiary Solidigm. So too is computational storage supplier ScaleFlux. That adds up to a lot of SSD controller expertise.

Hammerspace says that the outcome of replacing traditional file servers with OFP drives, sleds, and racks is an end to scale-out NAS bottlenecks, with their throttled latency, complex networking, and limited scalability. Its messaging seems aimed at hyperscalers, which could benefit from the cost and complexity reductions inherent in the OFP design.

OFP Hammerspace graphic.

We move to a situation of much more effective use of hardware and greatly reduced network complexity (fewer hops and switches).

How would Hammerspace founder and CEO David Flynn’s StreamFast initiative fit into this OFP picture? That will require a follow-up article.

Bootnote

It is conceivable that object storage, such as Cloudian's HyperStore or MinIO's AIStor, could run on OFP DPU hardware and provide similar benefits to OFP in an object environment. AIStor already supports Nvidia's BlueField DPUs and has AI-specific features like Nvidia integrations for GPU-direct I/O. Cloudian also supports BlueField DPUs.