Data Management

Storage news ticker – 16 January 2026

Published

There will be a February 2026 Deposit Ceremony at the Arctic World Archive in Svalbard. AWA says: “This ceremony is a rare opportunity to access the vault inside the mountain, and this edition will be particularly notable with a special UNESCO deposit taking place during the World Legacy Summit on February 26.” More information here.

Startup ClickHouse has raised $400 million in a Series D round led by Dragoneer Investment Group, with participation from Bessemer Venture Partners, GIC, Index Ventures, Khosla Ventures, Lightspeed Venture Partners, accounts advised by T. Rowe Price Associates, Inc., and WCM Investment Management. ClickHouse supplies a very fast, open source online analytical processing (OLAP) columnar database with SQL query access. Total funding is $1.05 billion.

ClickHouse Cloud, capable of processing billions of events per second, launched as a fully-managed cloud service in December 2022, building on open source real-time analytics technology used in production at companies such as eBay, Uber, and Disney. The company now serves more than 3,000 customers on its fully-managed service, ClickHouse Cloud, with ARR growing more than 250 percent year over year. Over the past three months, customers including Capital One, Lovable, Decagon, Polymarket, and Airwallex have adopted the platform or expanded existing deployments. These customers join an established base that includes AI innovators and major brands like Meta, Cursor, Sony, and Tesla.

ClickHouse has acquired Langfuse, the open-source LLM observability platform. Unlike traditional observability, which focuses on system health and performance metrics, LLM observability focuses on ensuring that non-deterministic and increasingly complex AI systems produce outputs that are accurate, safe, and aligned with user intent. As AI systems become increasingly embedded in production workflows, LLM observability has emerged as a critical requirement for teams building and operating AI-powered applications. The Langfuse open source project has seen rapid adoption, ending 2025 at more than 20,000 GitHub stars and more than 26 million SDK installs per month.
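To make the distinction concrete, here is a minimal, self-contained sketch of what LLM observability records beyond system metrics: each model call is wrapped so its input, output, latency, and output-quality checks are captured as a trace. All names are illustrative and this is not Langfuse's actual SDK, which offers far richer tracing.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Trace:
    """One recorded LLM interaction: input, output, latency, and checks."""
    prompt: str
    output: str = ""
    latency_ms: float = 0.0
    checks: dict = field(default_factory=dict)

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; deterministic for the example.
    return f"Answer to: {prompt}"

def traced_call(prompt: str, banned_words=("password",)) -> Trace:
    """Wrap a model call, recording latency plus simple output checks."""
    start = time.perf_counter()
    output = fake_llm(prompt)
    latency_ms = (time.perf_counter() - start) * 1000
    checks = {
        # Traditional observability would stop at latency; LLM
        # observability also asks whether the output itself is acceptable.
        "non_empty": bool(output.strip()),
        "no_banned_words": not any(w in output.lower() for w in banned_words),
    }
    return Trace(prompt=prompt, output=output, latency_ms=latency_ms, checks=checks)

trace = traced_call("What is LLM observability?")
print(trace.checks)  # both checks pass for this prompt
```

In a real deployment the checks would be semantic (accuracy, safety, alignment with user intent) rather than simple string tests, but the trace-per-call shape is the same.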

ClickHouse announced an enterprise-grade Postgres service deeply integrated with ClickHouse. To power modern, real-time AI applications that require both transactional and analytical capabilities, ClickHouse is delivering a unified data stack that includes high-performance, scalable Postgres backed by NVMe storage and with native CDC capabilities. It says that, in just a few clicks, users can sync transactional data to ClickHouse, unlocking up to 100X faster analytics.
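The CDC (change data capture) flow the announcement describes can be sketched in miniature: writes to the transactional side also emit change events, which a sync step drains in batches into the analytical copy. This is a conceptual toy, not ClickHouse's actual service or API; all names are made up.

```python
transactional = {}  # primary key -> row (the OLTP side)
analytical = {}     # replicated copy (the OLAP side)
change_log = []     # ordered (op, key, row) events: the CDC feed

def write(key, row):
    """OLTP write that also records a change event for downstream sync."""
    transactional[key] = row
    change_log.append(("upsert", key, row))

def sync(batch_size=1000):
    """Drain the change log, applying events to the analytical copy in batches."""
    while change_log:
        batch = change_log[:batch_size]
        del change_log[:batch_size]
        for op, key, row in batch:
            if op == "upsert":
                analytical[key] = row
            elif op == "delete":
                analytical.pop(key, None)

write(1, {"user": "alice", "amount": 42})
write(2, {"user": "bob", "amount": 7})
sync()  # analytical copy now matches the transactional store
```

The point of the pattern is that analytical queries run against the replica, so heavy scans never touch the transactional database.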

……

Connectivity supplier Cloudflare has acquired the team behind Astro, the open-source web framework used by brands like Unilever, Visa and NBC News to build fast, content-driven websites. The move cements Cloudflare’s commitment to Astro remaining open source, while accelerating development of one of the web’s fastest frameworks — designed to ship only the code needed to load a page, boosting speed, SEO and performance. The news follows the beta release of Astro 6, bringing broader JavaScript runtime support and faster build times. Read more in a blog.

Daniel Esposito.

Data migrator and manager Datadobi has promoted Daniel Esposito to VP of Global Alliances. The firm says “Esposito will build a global network of service partners capable of delivering comprehensive unstructured data management solutions—addressing a rapidly emerging landscape that bridges traditional storage vendors and cloud providers.” Michael Jack, CRO, at Datadobi, said: “The opportunity is clear: enterprises need data management capabilities that neither storage vendors nor hyperscalers fully address. But this market won’t build itself. It requires service partners with deep expertise who can design solutions, not just move boxes. Daniel’s promotion reflects our commitment to finding and empowering those strategic partners—building an ecosystem where none existed before.”

Dell has a response to the rising SSD shortage. A blog by Brian Henderson, Director of Primary Storage & ISG Portfolio Messaging, says: “Enterprises everywhere are scrambling to prepare for a storage cost super-cycle they didn’t see coming. …At Dell, flash isn’t treated as a commodity. It’s treated as an efficiency engine, and at the heart of that engine is data reduction.” Dell’s PowerStore and PowerMax deliver 5:1 data reduction, meaning fewer drives and lower costs. “When a terabyte of SSD costs significantly more, reducing your need for additional capacity becomes your strongest defense. PowerStore and PowerMax make that possible.”
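The arithmetic behind the claim is simple enough to work through directly. Using the 5:1 ratio Dell cites (the logical capacity figure below is illustrative):

```python
def raw_capacity_needed(logical_tb: float, reduction_ratio: float) -> float:
    """Physical flash capacity required after data reduction."""
    return logical_tb / reduction_ratio

logical = 500.0  # TB of logical data to store (illustrative figure)
print(raw_capacity_needed(logical, 1.0))  # no reduction: 500.0 TB of SSD
print(raw_capacity_needed(logical, 5.0))  # 5:1 reduction: 100.0 TB of SSD
```

At 5:1, a price spike per terabyte of SSD applies to one-fifth as many drives, which is the "strongest defense" the blog is pointing at. Real-world ratios vary with workload compressibility.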

FalconStor Software announced Habanero, a globally available, fully managed SaaS offering to simplify secure offsite data protection for IBM Power customers. Integrating directly with existing IBM Power workloads via Power Virtual Server, backup applications, and established operational processes, Habanero enables customers to establish enterprise-grade offsite protection without deploying new infrastructure or changing how backups are run today. Delivered as a fully managed service, Habanero provides secure offsite retention, disaster recovery copies, and long-term archives, using IBM Cloud Object Storage, with simple, predictable pricing aligned to object-storage economics. FalconStor supplies and operates all underlying infrastructure, both on-premises and in the cloud, allowing customers to consume offsite data protection as a service while avoiding operational complexity.

Habanero is available through the IBM Cloud Catalog and is designed for partner-led delivery. Learn more here.

Hammerspace has sponsored a Neuralytix white paper, “AI Anywhere … Where is Anywhere?”, discussing how distributed data sources can be aggregated to counter data fragmentation. Hammerspace says: “Fast storage alone isn’t enough. Even the most advanced flash arrays cannot overcome the architectural limitations of fragmented data. The real competitive advantage lies in the ability to unify massive volumes of unstructured data into a single, global environment capable of continuously powering inference and generative AI.” Get the white paper here.

Hammerspace has its own slant on the SSD (and HDD) supply constraints, saying you should think less about buying new devices and more about how you use your existing storage media estate. It says four architectural points stand out:

1) Use SSD or HDD capacity you already own for additional applications or users — without moving the device or data that already exists.
Rather than migrating data or repurposing devices, customers are logically aggregating existing capacity across NAS, flash servers, and object storage using data assimilation into our global namespace and parallel global file system. Available capacity across multiple systems is pooled and made available, while existing data stays put. This immediately unlocks stranded capacity.

In this case, instead of shuffling physical SSDs from one vendor's system to another, you simply aggregate the available capacity from any storage vendor into one logical pool that can be made immediately available to capacity-starved applications or users.
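The pooling idea in point 1 reduces to a small calculation: free space scattered across several systems is presented as one logical figure, with no data moved. A toy sketch, with invented system names and capacities (this is not Hammerspace's implementation):

```python
# Each backing system reports total and used capacity in TB.
systems = {
    "nas-a":    {"total": 200, "used": 170},   # 30 TB free
    "flash-b":  {"total": 100, "used": 40},    # 60 TB free
    "object-c": {"total": 500, "used": 310},   # 190 TB free
}

def pooled_free_tb(systems: dict) -> int:
    """Free capacity across all systems, presented as one logical pool."""
    return sum(s["total"] - s["used"] for s in systems.values())

# 30 + 60 + 190 = 280 TB of otherwise stranded capacity becomes
# addressable without moving any device or any existing data.
print(pooled_free_tb(systems))
```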

2) Activate Tier-0 (the SSDs inside compute servers) – Modern CPU and GPU clusters typically contain massive NVMe capacity that is siloed per node. Turning that server-local NVMe into capacity in the global namespace and parallel global file system makes it available as additional shared storage. Organizations reduce dependency on additional external flash while improving performance, using flash they already paid for, right where the GPUs and CPUs are.

3) Treat cloud as a seamless extension, not a destination – Instead of tiering or copying data into separate cloud silos, on-prem and cloud storage are unified into the same global namespace and parallel global file system. Data placement becomes policy-driven, not migration-driven, giving teams flexibility when on-prem capacity is constrained — without changing paths, workflows, or performance assumptions.

4) Go beyond deduplication by eliminating duplicate copies altogether – Deduplication and compression are valuable tools — but they are not the full solution to reducing data storage footprint. In most environments, the bigger problem is proliferation of multiple copies of data: the same datasets copied across multiple storage systems, clusters, and cloud instances that all have their own isolated deduplication (if they have dedupe at all). That problem can only be solved globally, by operating all data within a single namespace and file system. When data is shared logically in a single data platform instead of copied, organizations reduce footprint at the source, not just after the fact — something siloed storage platforms and array-level deduplication simply cannot do.
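The global copy-elimination idea in point 4 can be sketched with a content-addressed catalog: when two teams register identical datasets, both get a logical path but only one physical copy is stored. A simplified illustration under invented names, not Hammerspace's actual mechanism:

```python
import hashlib

store = {}    # content hash -> stored bytes (one physical copy per unique content)
catalog = {}  # logical path -> content hash (many logical references allowed)

def put(path: str, data: bytes) -> None:
    """Register data under a logical path; store the bytes only once globally."""
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)  # no-op if identical content already exists
    catalog[path] = digest

dataset = b"training-set-v1" * 1000
put("/team-a/train.bin", dataset)
put("/team-b/train.bin", dataset)       # second reference, no second copy
put("/team-c/other.bin", b"different")  # genuinely new content is stored

print(len(catalog), len(store))  # 3 logical paths, 2 physical copies
```

Array-level dedupe applies this only within one box; operating the catalog across every system in a single namespace is what eliminates the cross-silo copies before they land.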

CMO Molly Presley says a combination of these approaches turns the flash shortage from an immediate crisis into a manageable constraint, reducing its impact on day‑to‑day operations.

Fast NVMe/TCP-accessed block storage software supplier Lightbits announced record growth and expanding customer adoption across financial services, e-commerce, neo-clouds, and cloud service providers in 2025. It says its customers had to “confront the performance limits and rising costs of legacy SAN and HCI architectures that struggle to deliver predictable high-performance and efficiency at scale.” CEO and co-founder Eran Kirzner said: “We delivered a 3X year-over-year increase in software purchases as well as a corresponding rise in new customers. The average deal size increased by 2X and we hit a record with a first-time deployment purchase of greater than 4X capacity. Our growth is a strong signal that the architecture is working, not just in benchmarks, but in real production environments.”

Identity security services supplier Okta has launched in-country Okta Platform tenants in India, delivering data residency and enhanced disaster recovery. It helps enable highly regulated sectors—such as banking, financial services, insurance, and healthcare—to securely adopt AI and strengthen their defenses against advanced cyber threats. Local Okta Platform tenants, hosted on AWS, will help Okta customers address India’s evolving data, security, and compliance challenges.

Reuters reports that SK Hynix plans to accelerate the opening of a new fab at Yongin, South Korea, by three months, and will begin deploying silicon wafers in February into a new fab, M15X, in Cheongju, South Korea, to produce HBM chips, a senior executive said, as surging memory demand pressures global supply. The fab in Yongin, 40 km (25 miles) south of Seoul, is part of the company’s planned 600 trillion won ($407 billion) investment in its “Semiconductor Cluster,” which will eventually house four fabs.

Starburst develops and uses the Trino open source distributed SQL engine to query and analyze distributed data sources. An Enterprise Strategy Group (ESG) Economic Validation Study, Economic Benefits of Starburst’s Data & AI Platform, shows that organizations using Starburst’s Data & AI Platform can achieve a 45 percent lower TCO over three years compared with alternative data platforms and specialized tools. The analysis, based on customer interviews and ESG’s independent economic modeling, found that organizations adopting Starburst significantly reduced infrastructure complexity, accelerated analytics and AI initiatives, and improved operational efficiency while scaling data access across distributed environments. ESG modeled a data-driven SaaS organization with $210 million in annual revenue and found that Starburst delivered a three-year return on investment (ROI) of 414 percent, driven by cost savings, improved productivity, avoided downtime, and faster time to insight. Get the report here.