Data Center Trends: Why You Need a Ceph Storage Cluster in the Data Center, in the Cloud and at the Edge
IT analyst firm Gartner predicts that by 2025, 75% of all data will be produced at the edge or in places other than on-premises data centers, citing the rise of artificial intelligence (AI) and 5G as accelerants of the trend. To run the critical containerized applications and the AI and data analytics workloads required to support geographically distributed users, data center administrators need high-performing, scalable, open-architecture solutions at regional and worldwide locations.
As the amount of data created outside the data center continues to grow, more compute and storage capacity will be needed closer to where data is created and applications are accessed, whether that is a microsite at the base of a 5G tower, a manufacturing floor, or a media streaming server.
Researching Ceph storage cluster solutions to achieve new levels of performance and efficiency? Here’s what data center administrators need to know to evaluate OSNEXUS object storage solutions. Questions? Drop us an email or book a meeting with a sales advisor.
💦 Ceph – An Open Solution to the Data Deluge
To keep pace with exponential data growth – on-premises, on public and private clouds, and at the edge – more enterprise and mid-sized data centers are migrating to Ceph, addressing increasingly distributed environments with open source software instead of expensive proprietary options. Ceph’s production-proven architecture was initially aimed at hyperscaler and HPC environments, but in recent years its advanced storage technology has appealed to any organization that would benefit from a unified cluster system supporting high-growth file and block storage, S3 object stores and data lakes.
Thanks to its federated data management and modular architecture, Ceph scales horizontally exceptionally well, whether the requirement is purpose-built performance or unbounded capacity. As nodes are added to the cluster, performance improves and capacity scales into the tens of petabytes and even exabytes – ideal for multi-site, multi-cluster deployments with asynchronous replication and disaster recovery requirements.
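What does that scale look like from an operator’s seat? As a minimal sketch, assuming a reachable cluster and the stock Ceph config path, here’s how aggregate capacity can be read through Ceph’s librados Python binding (the python3-rados package):

```python
# Minimal capacity check via librados (python3-rados).
# Assumes the default config path; adjust conffile for your deployment.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    stats = cluster.get_cluster_stats()  # raw cluster-wide counters
    total_tib = stats['kb'] / 1024**3    # KiB -> TiB
    used_tib = stats['kb_used'] / 1024**3
    print(f"Raw capacity: {total_tib:,.1f} TiB ({used_tib:,.1f} TiB used)")
    print(f"Objects stored: {stats['num_objects']:,}")
finally:
    cluster.shutdown()
```

The same counters keep working whether the cluster holds three nodes or three hundred, which is the point of Ceph’s horizontal model.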
Also Ceph: Management Complexities
Building a massive Ceph storage cluster infrastructure takes a high level of IT expertise – skills that only hyperscalers, HPC shops and Tier 1 service providers tend to possess in-house. Additionally, as object storage demands continue to grow, data center operators will want to leverage Ceph’s hyperscale capabilities in hybrid cloud strategies that reduce public cloud costs. Unfortunately, many enterprise and mid-sized organizations don’t have the experienced IT professionals on staff to deploy, manage and support the complex Ceph environments necessary to run AI, analytics, and containerized workloads.
⚡ Introducing OSNEXUS QuantaStor
For organizations without hyperscaler expertise, OSNEXUS designed the QuantaStor software-defined storage (SDS) platform on standard Ceph (Nautilus) and the CentOS Linux distribution, giving everyday data center users the ability to set up, manage and maintain large-scale, multi-site Ceph storage cluster environments.
No in-house Ceph knowledge, no problem!
With little or no knowledge of Ceph utilities or architecture, QuantaStor users can create storage pools for file, block, and object storage in minutes. QuantaStor enables quick setup and easy configuration of cluster nodes. In addition, QuantaStor’s powerful web-based configuration, monitoring and management features make it easy to set up large and complex configurations without resorting to CLI and console tools.
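For comparison, here’s a hedged sketch of the native plumbing that the web UI abstracts away: creating a pool through librados’ monitor command interface. The pool name and placement group count are illustrative values, not recommendations.

```python
# Creating a Ceph pool the manual way, via librados mon_command.
# 'demo-pool' and pg_num=128 are illustrative, not tuning advice.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    cmd = json.dumps({
        "prefix": "osd pool create",
        "pool": "demo-pool",
        "pg_num": 128,
    })
    ret, outbuf, status = cluster.mon_command(cmd, b'')
    print(f"return code: {ret}, status: {status}")
finally:
    cluster.shutdown()
```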
QuantaStor integrates with and extends Ceph and ZFS storage technologies to deliver an elastic, highly available SDS platform that can deploy, manage and expand file (NFS/SMB/CephFS), block (iSCSI/FC/Ceph RBD) and Amazon S3-compatible object storage via REST-based protocols.
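Because the object layer speaks the S3 protocol, standard S3 tooling works against it unchanged. Below is a minimal sketch using Python’s boto3 client; the endpoint URL, credentials and bucket name are placeholders, not QuantaStor defaults.

```python
# Talking to an S3-compatible endpoint with boto3.
# Endpoint, keys and bucket are placeholders for illustration.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://quantastor.example.com:8080',  # hypothetical endpoint
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

s3.create_bucket(Bucket='analytics-data')
s3.put_object(Bucket='analytics-data', Key='sample.txt', Body=b'hello ceph')
for obj in s3.list_objects_v2(Bucket='analytics-data').get('Contents', []):
    print(obj['Key'], obj['Size'])
```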
The secret sauce? OSNEXUS Storage Grid
In addition to deep integration with Ceph, OSNEXUS’ built-in storage grid technology unifies the management of storage nodes and clusters across public and private clouds. The QuantaStor storage grid manages the underlying Ceph technology end-to-end, automating best practices within the platform to ensure that deployments are set up correctly – without the need for ceph-deploy or one-way Ansible configuration tools.
From a small three-system configuration to a hyperscale, multi-cluster environment, QuantaStor delivers the comprehensive storage capabilities critical for organizations seeking to expand distributed environments.
⚡ Ceph Solutions powered by QuantaStor
To accelerate the adoption of Ceph, QuantaStor is designed to install on bare metal or in virtual machine environments. Quickly deploy highly scalable, high-performance Ceph cluster solutions and expansion nodes on industry-standard hardware within minutes! Powered by QuantaStor, the task-optimized Pogo Linux StorageDirector appliance delivers a turnkey unified cluster management solution that’s optimized for Ceph and features deep hardware integration with Intel or AMD processors and Western Digital NVMe SSDs and NVMe-oF technologies.
High Availability with Erasure Coding & Replication
The StorageDirector can be configured for high availability using three (or more) QuantaStor appliances in a storage layout designed to support erasure coding. While Ceph replicates data to keep it fault-tolerant during migrations, QuantaStor’s integrated hardware management actively monitors key Pogo Linux hardware elements, including HDD and flash storage health, power supplies, fans and thermals.
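To make the trade-off between the two protection schemes concrete, here’s a back-of-the-envelope Python calculation comparing 3x replication with a common 4+2 erasure-coding profile; the 1 PB raw figure is an assumed example.

```python
# Usable capacity under replication vs. erasure coding (example figures).
raw_tb = 1000  # assume 1 PB of raw cluster capacity

# Replication: every object is stored 'size' times.
replica_size = 3
usable_replicated = raw_tb / replica_size

# Erasure coding: k data chunks + m coding chunks per object.
k, m = 4, 2
usable_ec = raw_tb * k / (k + m)

print(f"3x replication: {usable_replicated:.0f} TB usable ({1 / replica_size:.0%} efficiency)")
print(f"EC {k}+{m}:       {usable_ec:.0f} TB usable ({k / (k + m):.0%} efficiency)")
```

Erasure coding roughly doubles usable capacity here at the cost of extra CPU on writes and rebuilds, which is why capacity-focused object tiers tend to favor it while latency-sensitive block tiers favor replication.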
In our next article, we’ll explain how a single QuantaStor cluster deployment supports both scale-up file and block storage (SAN/NAS) focused on capacity, and scale-out object storage (S3) focused on performance and unbounded capacity.
Choosing a Ceph Storage Cluster Management Solution
Low latency and high transfer rates are of little benefit if they swamp the target application. While these systems can generate millions of IOPS, the reality is that very few workloads need that level of performance. There is, however, an emerging class of workloads that can take advantage of all the performance and low latency of an end-to-end NVMe system.
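One rough way to reason about whether a workload can actually consume that performance is Little’s Law: outstanding I/Os = IOPS x latency. The sketch below uses illustrative figures, not measured benchmarks.

```python
# Little's Law applied to storage queue depth (illustrative numbers).
iops = 1_000_000     # an end-to-end NVMe system near 1M IOPS
latency_s = 100e-6   # 100 microseconds per I/O

queue_depth = iops * latency_s  # concurrency needed to sustain that rate
print(f"Outstanding I/Os needed to sustain {iops:,} IOPS: {queue_depth:.0f}")
```

A workload that rarely keeps 100 I/Os in flight simply cannot drive such a system to its ceiling, which is why most applications never notice the difference.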
If you’d like to learn more about how to improve your data center infrastructure efficiency by up to 90% while minimizing its footprint, give us a call at (888) 828-7646, email us, or book a time on our calendar to speak. We’ve helped organizations of all sizes deploy composable solutions for just about every IT budget.