Open systems storage marriage possibilities and problems
There is storage industry speculation that IBM intends to do more with its NetApp-sourced N series than simply run it alongside its DSx000 storage line, perhaps even transitioning to it completely. Could it do that?
IBM is late in bringing technologies such as thin provisioning to its storage arrays. Indeed, from the open systems server and storage perspective it may look as if nothing much is happening with IBM's own disk storage. Since IBM is reselling all of NetApp's storage products, a 1+1=3 back-of-the-envelope conspiracy theory says IBM is gearing up to save lots of R&D and engineering dollars by transitioning over to NetApp products lock, stock and barrel.
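For readers unfamiliar with thin provisioning, the idea is simple: an array promises volumes their full logical capacity up front but allocates physical extents only when blocks are first written, allowing the pool to be oversubscribed. A minimal sketch of that allocate-on-first-write behaviour (purely illustrative; the class and extent sizes are invented, not any vendor's implementation):

```python
# Illustrative sketch of thin provisioning: capacity is promised to
# volumes up front, but physical extents are allocated only on first write.
class ThinPool:
    def __init__(self, physical_extents):
        self.free = physical_extents          # extents actually on disk
        self.allocated = {}                   # (volume, extent_index) -> True

    def create_volume(self, name, logical_extents):
        # Promising capacity costs nothing physically at creation time.
        return {"name": name, "size": logical_extents}

    def write(self, volume, extent_index):
        key = (volume["name"], extent_index)
        if key not in self.allocated:         # allocate on first touch only
            if self.free == 0:
                raise RuntimeError("pool exhausted: thin pools can oversubscribe")
            self.free -= 1
            self.allocated[key] = True

pool = ThinPool(physical_extents=100)
# Oversubscribe: promise 300 extents against 100 physical ones.
vols = [pool.create_volume(f"vol{i}", 100) for i in range(3)]
pool.write(vols[0], 0)
pool.write(vols[0], 0)   # rewriting the same extent allocates nothing new
pool.write(vols[1], 5)
print(pool.free)          # 98 extents still unallocated
```

The point is that three volumes totalling 300 extents of promised capacity consume only two physical extents until more data is actually written.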
It's crudely similar to a PCs-to-Lenovo scenario, but there is a huge problem with this. No NetApp system can integrate with IBM mainframes the way the monolithic DS8000 can. NetApp makes open systems server storage, not mainframe storage. IBM cannot abandon the DS8000 or transition its users to NetApp products. For them it would be like exchanging a customised Ferrari for a moped.
For example, IBM has very recently enhanced this mainframe-attach drive array to better support z10 and z/OS mainframe environments by extending its microcode to work with or improve z/OS facilities. These enhancements improve certain storage facilities for the mainframe but are not generally extensible to the open systems (Windows, Unix/Linux) server/storage world, because mainframes do unique things.
The recent z10 announcement included new extended distance FICON. A new protocol is being implemented to increase the number of packets in flight across long-distance FICON links of 50km or more. Currently, link spoofing or channel extension equipment is needed; with the new protocol there is no need for it, and functions like z/OS Global Mirror will work faster across such distances. The mainframe's updates to its local DS8000 are retained in cache; a z/OS middleware function called System Data Mover picks them up and sends them asynchronously to a second DS8000 across the extended FICON link.
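The asynchronous pattern described above can be sketched in a few lines. This is an illustration of the general mechanism only (the class names are invented, and it is not IBM's implementation): host writes complete against the local array at local speed, while a data-mover task drains the buffered updates to the remote copy later, so link latency never delays the host.

```python
from collections import deque

# Illustrative sketch of asynchronous remote mirroring in the style of
# z/OS Global Mirror (invented names, not IBM's actual design): writes
# complete locally at once; a mover task ships queued updates later.
class Array:
    def __init__(self):
        self.blocks = {}

class AsyncMirror:
    def __init__(self, local, remote):
        self.local, self.remote = local, remote
        self.pending = deque()                 # updates retained "in cache"

    def host_write(self, block, data):
        self.local.blocks[block] = data        # acknowledged at local speed
        self.pending.append((block, data))     # queued for the data mover

    def data_mover_drain(self):
        # Ships queued updates across the long-distance link asynchronously.
        while self.pending:
            block, data = self.pending.popleft()
            self.remote.blocks[block] = data

local, remote = Array(), Array()
mirror = AsyncMirror(local, remote)
mirror.host_write(1, "a")
mirror.host_write(2, "b")
print(remote.blocks)                   # {} : remote lags until the mover runs
mirror.data_mover_drain()
print(remote.blocks == local.blocks)   # True
```

The trade-off, of course, is that the remote copy lags the local one between mover cycles, which is exactly why a faster long-distance link protocol matters.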
In a sense this is a mainframe version of the TCP/IP acceleration found in wide area file services products, such as those from Riverbed and Cisco. It requires the DS8000 to be aware of the extended FICON protocol, and its microcode is being extended accordingly. Other arrays that attach by FICON will know nothing about it. There are several other ways in which the DS8000's functionality is integrated with, and specific to, mainframe ways of working.
Because of this you can't simply stick a FICON connector on a top-end N series and expect mainframe customers to fall over NetApp in gratitude. The idea is simply risible.
A NetApp transition is conceivable for IBM's open systems storage, though. The unified NetApp environment is a big plus for IBM there, giving it immediate feature upgrades that would take a lot of effort to implement across its DS3000 and DS4000 lines, and also the modular DS6000, which is pitched at enterprise open systems server environments as well as mainframe ones. But there is a problem area here: IBM's SAN Volume Controller (SVC).
The SVC is an appliance-like specialised server that sits inside a Fibre Channel SAN fabric, linked to a SAN director, and virtualises the disk storage behind the director. As such it competes with director-located virtualisation products like EMC's Invista, and with edge-of-fabric, enhanced array controller virtualisation like HDS's Universal Storage Platform. Broadly speaking they do much the same thing, except that the SVC, unlike the HDS product, has no file serving or file virtualising function.
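What an in-band block virtualiser like the SVC does, in essence, is present hosts with virtual disks whose extents are mapped onto capacity carved from heterogeneous backend arrays. A toy sketch of that extent-mapping idea (assumed mechanics with invented names and a toy extent size, not the SVC's actual design):

```python
# Illustrative sketch of in-band block virtualisation: hosts see virtual
# disks whose extents a fabric appliance maps onto extents drawn from
# heterogeneous backend arrays. Names and sizes here are invented.
EXTENT = 16                                   # blocks per extent (toy value)

class Virtualizer:
    def __init__(self):
        self.backends = {}                    # array name -> dict of blocks
        self.vdisks = {}                      # vdisk -> [(array, base_block)]

    def add_backend(self, name):
        self.backends[name] = {}

    def create_vdisk(self, name, mapping):
        # mapping: ordered list of (backend_array, starting_block) extents,
        # which may interleave capacity from different vendors' arrays.
        self.vdisks[name] = mapping

    def write(self, vdisk, lba, data):
        array, base = self.vdisks[vdisk][lba // EXTENT]  # locate the extent
        self.backends[array][base + lba % EXTENT] = data

v = Virtualizer()
v.add_backend("emc_box")
v.add_backend("hds_box")
v.create_vdisk("vdisk0", [("emc_box", 0), ("hds_box", 0)])
v.write("vdisk0", 3, "x")     # lands on emc_box, block 3
v.write("vdisk0", 20, "y")    # lands on hds_box, block 4
print(v.backends["emc_box"][3], v.backends["hds_box"][4])  # x y
```

Because the host only ever addresses the virtual disk, the appliance can migrate extents between backend arrays without the host noticing, which is the main selling point of this class of product.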
The problem is that NetApp has a heterogeneous storage virtualising product equivalent to the HDS one: its V-Series. It unifies Fibre Channel and IP SAN storage with file storage, and to that extent it is more capable than IBM's SVC. But the SVC has proved very popular, with thousands of units installed, more than the HDS product and many, many times more than Invista.
Were IBM to consider switching from supplying its own DSx000 open systems storage to its NetApp-sourced N series line, it would need to resolve the competing attractions of the SVC and the V-Series. Internal loyalty to the SVC inside IBM is strong: it is the product, possibly the only storage product, that has decisively outsold the equivalent one from that bane of IBM's storage people, EMC. Giving that up for the relatively new V-Series might stick in quite a few IBM throats.
There are other storage software problems with a DSx000 transition to NetApp too, ones to do with IBM storage software products that partially overlap NetApp's Data ONTAP functions. But these seem less important than the SVC-V-Series tension.
Why has IBM taken on NetApp's products, though, if not to examine a possible transition? Simply by taking them on, it has implied that NetApp storage offers attractions to customers that the DSx000 line does not, and there is no evidence of an IBM development effort to fix that.
In fact we could say that NetApp has done a better job of building storage array products using third-party array hardware suppliers than IBM has. So why bother to reinvent a wheel that NetApp has invented already? IBM has a history of progressively distancing itself from the hardware manufacture of its products and then, once manufacturing has been outsourced, from the detailed product development work. This history includes drive arrays (Xyratex), printers (Lexmark), and PCs (Lenovo).
Conceivably, if the SVC-V-Series problem could be solved, IBM could switch from the LSI Logic-sourced DS3000 and DS4000 to NetApp products. It designed the DS6000 and DS8000 in-house, and these might remain as its mainframe-centric storage, with open systems storage given over to NetApp via the N series.
However, there is another cloudy area: XIV. IBM bought this grid-like cloud storage supplier in a surprise move earlier this year. NetApp has a clustered storage capability with the Spinnaker technology in its ONTAP GX product; it isn't ready for prime time yet and should eventually be integrated into mainstream ONTAP. How would XIV array technology function in IBM's storage line-up alongside the N series? Small XIV grids would compete with NetApp clusters.
One back-of-the-envelope reading is that the N series is a stop-gap and that IBM sees XIV technology as its non-mainframe DS4000/DS3000 replacement. It's equally possible that there is more than one school of storage thought inside IBM and that the company itself hasn't decided where it's going. It certainly isn't telling anyone in the media.
There are three takeaways from this discussion. First, IBM cannot substitute either NetApp or XIV products for the DS8000 without a massive amount of work. Second, the OEM deal with NetApp indicates that IBM does not believe it can, or does not want to, work with its OEM suppliers to overhaul the DS4000/DS3000 products, and needs NetApp to compensate for those products' deficiencies.
Third, and last, the XIV purchase indicates that IBM is not ready to cede its non-mainframe storage, lock, stock and barrel, to NetApp. It doesn't believe NetApp has a viable Web 2.0 cloud storage product strategy it can use, and it wants its own stake in that game.