
Mastering Ceph Storage Configuration in Proxmox 8 Cluster

Dive into the power of Ceph storage configuration in a Proxmox cluster and explore the benefits, redundancy, and versatility of Ceph shared storage.

Quick Summary

  • Understanding the Ceph Storage Cluster, Configuring the Proxmox and Ceph Integration, Installing and Configuring Ceph, Setting up Ceph OSD Daemons and Ceph Monitors, Creating Ceph Monitors, Creating a Ceph Pool for VM and Container Storage, Utilizing Ceph Storage for Virtual Machines and Containers, Managing Data with Ceph, Ceph Object Storage, Block Storage with Ceph, Ceph File System, and Ceph Storage Cluster and Proxmox.
  • Ceph Storage is an open-source, highly scalable storage solution designed to accommodate object storage devices, block devices, and file storage within the same cluster.
  • In essence, Ceph is a unified system that aims to simplify the storage and management of large data volumes.

The need for highly scalable storage solutions that are fault-tolerant and offer a unified system is undeniable in data storage. One such solution is Ceph Storage, a powerful and flexible storage system that facilitates data replication and provides data redundancy. In conjunction with Proxmox, an open-source virtualization management platform, it can help manage important business data with great efficiency. Ceph Storage is an excellent storage platform because it’s designed to run on commodity hardware, providing an enterprise-level deployment experience that’s both cost-effective and highly reliable. Let’s look at mastering Ceph Storage configuration in a Proxmox 8 cluster.

What is Ceph Storage?

Ceph Storage is an open-source, highly scalable storage solution designed to accommodate object storage devices, block devices, and file storage within the same cluster. In essence, Ceph is a unified system that aims to simplify the storage and management of large data volumes.

Ceph Storage is an open source and robust storage solution

A Ceph storage cluster consists of several different types of daemons: Ceph OSD Daemons (OSD stands for Object Storage Daemon), Ceph Monitors, Ceph MDS (Metadata Servers), and others. Each daemon type plays a distinct role in the operation of the storage system.

Ceph OSD Daemons handle data storage and replication, storing the data across different devices in the cluster. The Ceph Monitors, on the other hand, track the cluster state, maintaining a map of the entire system, including all the data and daemons.

Ceph MDS, or metadata servers, are specific to the Ceph File System. They store metadata for the filesystem, which allows the Ceph OSD Daemons to concentrate solely on data management.

A key characteristic of Ceph storage is its intelligent data placement method. An algorithm called CRUSH (Controlled Replication Under Scalable Hashing) decides where to store and how to retrieve data, avoiding any single point of failure and effectively providing fault-tolerant storage.
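If you are curious what the CRUSH hierarchy looks like on a running cluster, it can be inspected with a couple of read-only commands. This is just a sketch; it assumes the Ceph CLI tools (and crushtool, which ships with the standard Ceph packages) are installed, as they will be after the Proxmox integration steps later in this article.

    # Show the CRUSH hierarchy of hosts and OSDs with their weights
    ceph osd crush tree

    # Dump the compiled CRUSH map and decompile it for reading
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt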

What is a Proxmox Cluster and Why is Shared Storage Needed?

A Proxmox cluster is a group of Proxmox VE servers working together. These servers, known as nodes, share resources and operate as a single system. Clustering allows for central management of these servers, making it easier to manage resources and distribute workloads across multiple nodes.

Below, I have created a new Proxmox 8 cluster of three nodes.

Proxmox 8 cluster

Shared storage systems

Shared storage is essential in a Proxmox cluster for several reasons. Firstly, it enables high availability. If one node fails, the virtual machines (VMs) or containers running on that node can be migrated to another node with minimal downtime. Shared storage ensures the data those VMs or containers need is readily available on all nodes.

Secondly, shared storage facilitates load balancing. You can easily move VMs or containers from one node to another, distributing the workload evenly across the cluster. This movement enhances performance, as no single node becomes a bottleneck.

Lastly, shared storage makes data backup and recovery more manageable. With all data centrally stored, it’s easier to implement backup strategies and recover data in case of a failure. In this context, Ceph, with its robust data replication and fault tolerance capabilities, becomes an excellent choice for shared storage in a Proxmox cluster.

Why is Ceph Storage a Great Option in Proxmox for Shared Storage?

Proxmox supports several block and object storage solutions. Ceph Storage brings Proxmox a combination of scalability, resilience, and performance that few other storage systems can offer. With its unique ability to simultaneously offer object, block, and file storage, Ceph can meet diverse data needs, making it an excellent choice for shared storage in a Proxmox environment.

One of the key reasons that Ceph is a great option for shared storage in Proxmox is its scalability. As your data grows, Ceph can effortlessly scale out to accommodate the increased data volume. You can add more storage nodes to your cluster at any time, and Ceph will automatically start using them.

Fault tolerance is another reason why Ceph is a great choice. With its inherent data replication and redundancy, you can lose several nodes in your cluster, and your data will still remain accessible and intact. In addition to this, Ceph is designed to recover automatically from failures, meaning that it will strive to replicate data to other nodes if one fails.

Ceph’s integration with Proxmox for shared storage enables virtual machines and containers in the Proxmox environment to leverage the robust Ceph storage system. This integration makes Ceph an even more attractive solution, as it brings its strengths into a virtualized environment, further enhancing Proxmox’s capabilities.

Finally, Ceph’s ability to provide object, block, and file storage simultaneously allows it to handle a wide variety of workloads. This versatility means that whatever your shared storage needs, Ceph in a Proxmox environment is likely to be a solution that can handle it effectively and efficiently.

Understanding the Ceph Storage Cluster

At its core, a Ceph storage cluster consists of several components, each having a specific role in the storage system. These include Ceph OSDs (Object Storage Daemons), which manage data storage, and Ceph Monitors, which maintain the cluster state. The CRUSH algorithm controls data placement, enabling scalable hashing and avoiding any single point of failure in the cluster. Ceph MDS (Metadata Servers) are also part of this structure, which store metadata associated with the Ceph filesystem.

Ceph clients interface with these components to read and write data, providing a robust, fault-tolerant solution for enterprise-level deployments. The data stored in the cluster is automatically replicated to prevent loss, thanks to controlled replication mechanisms.

Ceph Storage Architecture
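Once the cluster is built (covered in the sections below), each of these components can be checked from the shell of any node. A minimal sketch, assuming the Ceph CLI is available:

    ceph mon stat   # monitor quorum membership
    ceph osd stat   # OSDs that are up and in
    ceph mds stat   # metadata servers (only relevant if CephFS is used)
    ceph df         # raw capacity and per-pool usage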

Configuring the Proxmox and Ceph Integration

Proxmox offers a user-friendly interface for integrating Ceph storage clusters into your existing infrastructure. This integration harnesses the combined power of object, block, and file storage, offering a versatile data storage solution.

Before you begin, ensure that your Proxmox cluster is up and running, and the necessary Ceph packages are installed. It’s essential to note that the configuration process varies depending on the specifics of your existing infrastructure.
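As a quick pre-flight check, you can confirm the Proxmox cluster itself is healthy and quorate from the shell of any node before layering Ceph on top. These are standard Proxmox commands:

    pvecm status   # the cluster should report quorum
    pvecm nodes    # all member nodes should be listed and online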

Installing and Configuring Ceph

Start by installing the Ceph packages in your Proxmox environment. These packages include essential Ceph components like Ceph OSD daemons, Ceph Monitors (Ceph Mon), and Ceph Managers (Ceph Mgr).

Click on one of your Proxmox nodes and navigate to Ceph. When you click Ceph, you will be prompted to install it.

Install Ceph on each Proxmox 8 cluster node

This begins the setup wizard. First, choose your Repository. This is especially important if you don’t have a subscription: in that case, choose the No Subscription option. For production environments, use the Enterprise repository.

Choosing the Ceph repository and beginning the installation

You will be asked if you want to continue the installation of Ceph. Type Y to continue.

Verify the installation of Ceph storage modules
Ceph installed successfully
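If you prefer the command line, the wizard's install step maps to the pveceph tool. A sketch of the equivalent command, run on each node (use the enterprise repository instead if you have a subscription):

    pveceph install --repository no-subscription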

Next, you will need to choose the Public Network and the Cluster Network. Here, I don’t have dedicated networks configured since this is a nested installation, so I am just choosing the same subnet for each.

Configuring the public and cluster networks
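The network selection step can also be done from the shell on the first node. In the sketch below, 10.10.10.0/24 is only an example subnet; substitute your own, and ideally place the cluster (replication) traffic on a separate, faster network:

    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.10.0/24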

If you click the Advanced checkbox, you will be able to set the Number of replicas and Minimum replicas.

Advanced configuration including the number of replicas

At this point, Ceph has been successfully installed on the Proxmox node.

Ceph configured successfully and additional setup steps needed

Repeat these steps on the remaining cluster nodes in your Proxmox cluster configuration.

Setting up Ceph OSD Daemons and Ceph Monitors

Ceph OSD Daemons and Ceph Monitors are crucial to the operation of your Ceph storage cluster. The OSD daemons handle data storage, retrieval, and replication on the storage devices, while Ceph Monitors maintain the cluster map, tracking active and failed cluster nodes.

You’ll need to assign several Ceph OSDs to handle data storage and maintain the redundancy of your data.

Adding an OSD in Proxmox Ceph storage
The OSD begins configuring and adding
The OSD is successfully added to the Proxmox host
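For reference, the same OSD creation can be done from the shell. The device name below is only an example; the disk must be empty and unpartitioned:

    # Create an OSD on an empty disk (adjust /dev/sdb per node)
    pveceph osd create /dev/sdb

    # Confirm the new OSD is up and placed in the CRUSH hierarchy
    ceph osd tree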

Also, set up more than one Ceph Monitor to ensure high availability and fault tolerance.

OSDs added to all three Proxmox nodes

At this point, if we visit the Ceph storage dashboard, we will see the status of the Ceph storage cluster.

Healthy Ceph storage status for the cluster
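The same health information shown in the dashboard is available from the shell:

    ceph -s              # overall status: health, monitors, OSDs, pools
    ceph health detail   # more verbose output if the cluster is not HEALTH_OK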

Creating Ceph Monitors

Let’s add additional Ceph Monitors, as we have only configured the first node as a Ceph Monitor so far. What is a Ceph Monitor?

A Ceph Monitor, often abbreviated as Ceph Mon, is an essential component in a Ceph storage cluster. Its primary function is to maintain and manage the cluster map, a crucial data structure that keeps track of the entire cluster’s state, including the location of data, the cluster topology, and the status of other daemons in the system.

Ceph Monitors contribute significantly to the cluster’s fault tolerance and reliability. They work in a quorum, meaning there are multiple monitors, and a majority must agree on the cluster’s state. This setup prevents any single point of failure, as even if one monitor goes down, the cluster can continue functioning with the remaining monitors.

By keeping track of the data locations and daemon statuses, Ceph Monitors facilitate efficient data access and help ensure the seamless operation of the cluster. They are also involved in maintaining data consistency across the cluster and managing client authentication and authorization.
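You can inspect the current monitor map and quorum membership at any time from the shell:

    ceph quorum_status --format json-pretty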

Here, we are adding the second Proxmox node as a monitor. I added the third one as well.

Adding Ceph Monitors to additional Proxmox hosts

Now, each node is a monitor.

All three Proxmox hosts are running the Ceph Monitor
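If you prefer the shell, here is a sketch of the equivalent commands, run on each additional node that should host a monitor (adding a standby manager alongside each monitor is optional but common):

    pveceph mon create
    pveceph mgr create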

Creating a Ceph Pool for VM and Container storage

Now that we have the OSDs and Monitors configured, we can create our Ceph Pool. Below we can see the replicas and minimum replicas.

Creating a Ceph Pool

Now, the Ceph Pool is automatically added to the Proxmox cluster nodes.

Pool added to all three Proxmox nodes
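A sketch of the equivalent pool creation from the shell; the pool name ceph-pool is only an example, and it is worth checking pveceph pool create --help for the exact flags on your version:

    pveceph pool create ceph-pool --size 3 --min_size 2 --add_storages

    # Verify the pool and the new Proxmox storage entry
    ceph osd pool ls detail
    pvesm status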

Utilizing Ceph storage for Virtual Machines and Containers

Now that we have the Ceph Pool configured, we can use it for backing storage for Proxmox Virtual Machines and Containers. Below, I am creating a new LXC container. Note how we can choose the new Ceph Pool as the container storage.

Choosing the new pool for Proxmox LXC container storage

The LXC container is created successfully, with no storage issues.

The new LXC container is created successfully on Ceph storage

We can see the container is up and running without issue. I was also able to migrate the LXC container to another node without any problems.

The LXC container operating on the Ceph Pool
Ceph performance dashboard in Proxmox
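For completeness, here is a rough sketch of the same container creation and migration from the shell. The VMID, template file name, bridge, target node name, and storage name (ceph-pool) are all examples, so adjust them to your environment:

    pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname ceph-test --memory 1024 \
      --rootfs ceph-pool:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 200

    # Restart-mode migration of the running container to another node
    pct migrate 200 pve2 --restart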

Now, let’s learn a little more about Ceph.

Managing Data with Ceph

Ceph stores data in a unique way. It breaks data down into objects before distributing them across the cluster. This approach facilitates scalable storage across multiple storage nodes and also provides the opportunity to implement erasure coding or replication for data protection.
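If you want to experiment with erasure coding instead of straight replication, a minimal sketch follows. The profile name, pool name, and placement group numbers are examples; k=2, m=1 with a host failure domain tolerates the loss of one host on a three-node cluster:

    # Define an erasure-code profile: 2 data chunks + 1 coding chunk
    ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host

    # Create a pool that uses the profile
    ceph osd pool create ec-pool 32 32 erasure ec-2-1

    # Needed if RBD or CephFS data will live on the erasure-coded pool
    ceph osd pool set ec-pool allow_ec_overwrites true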

Ceph Object Storage

Object storage in Ceph is done through RADOS (Reliable Autonomic Distributed Object Store). Objects stored are automatically replicated across different storage devices to ensure data availability and fault tolerance. The CRUSH algorithm, a scalable hashing technique, controls how the objects are distributed and accessed, thus avoiding any single point of failure.
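You can see RADOS object storage in action directly with the rados tool. A small sketch, assuming a pool named ceph-pool (the object and file names are arbitrary):

    echo "hello ceph" > hello.txt
    rados -p ceph-pool put hello-object hello.txt   # store the object
    rados -p ceph-pool ls                           # list objects in the pool
    rados -p ceph-pool get hello-object out.txt     # retrieve it again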

Block Storage with Ceph

Ceph Block Devices, or RADOS Block Devices (RBD), are the part of the Ceph storage system that allows Ceph to provide block storage. These block devices can be virtualized, providing a valuable storage solution for virtual machines in the Proxmox environment. Block storage with Ceph offers features like thin provisioning and cache tiering, further enhancing data storage efficiency.
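Proxmox creates RBD images automatically when you place VM disks on a Ceph storage, but they can also be managed by hand. A sketch, assuming a pool named ceph-pool:

    rbd create --size 10G ceph-pool/test-disk   # create a 10 GiB image
    rbd ls ceph-pool                            # list images in the pool
    rbd info ceph-pool/test-disk                # show image details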

Ceph File System

The Ceph File System (CephFS) is another significant feature of Ceph. It’s a POSIX-compliant file system that uses a Ceph Storage Cluster to store data, allowing for the usual file operations while adding scalability, reliability, and performance.

The Ceph MDS (metadata servers) play a crucial role in the operation of CephFS. They manage file metadata, such as file names and directories, allowing the Ceph OSDs to focus on data storage. This separation improves the overall performance of the Ceph storage system.
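In Proxmox, CephFS can be deployed with the pveceph tool as well. A sketch of the steps; the filesystem name cephfs is the common default, and it is worth checking pveceph fs create --help for the exact flags on your version:

    # Metadata servers must exist first -- run on each node that should host an MDS
    pveceph mds create

    # Create the filesystem and register it as a Proxmox storage
    pveceph fs create --name cephfs --add-storage

    # Confirm the filesystem and MDS state
    ceph fs status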

Ceph Storage Cluster and Proxmox: A Scalable Storage Solution

You leverage a highly scalable storage solution by configuring a Ceph storage cluster in a Proxmox environment. The combined use of object, block, and file storage methods offers versatile data handling suited to various data types and use cases, such as cloud hosting and cloud-based services.

This combination enables managing important business data effectively while maintaining redundancy and fault tolerance. Whether you’re dealing with large file data or smaller objects, using Ceph in a Proxmox environment ensures that your data is safely stored and easily retrievable.

Frequently Asked Questions (FAQs)

How does Ceph Storage achieve fault tolerance?

Ceph storage is inherently fault-tolerant due to its use of controlled data replication. Data stored in a Ceph cluster is automatically replicated across multiple OSDs. Ceph can recover data from other nodes if one node fails, ensuring no data loss. The CRUSH algorithm helps to maintain this fault tolerance by dynamically adjusting the data distribution across the cluster in response to node failures.

Can Ceph Storage handle diverse data types?

Absolutely! Ceph’s ability to handle object, block, and file storage makes it versatile and flexible. Ceph uses RADOS Block Devices for block storage, the Ceph filesystem for file storage, and RADOS for object storage. This versatile design enables Ceph to manage diverse data types and workloads, making it an excellent fit for varied data needs.

How does cache tiering enhance Ceph’s performance?

Cache tiering is a performance optimization technique in Ceph. It uses smaller, faster storage (like SSDs) as a cache for a larger, slower storage tier. Frequently accessed data is moved to the cache tier for quicker retrieval. This setup significantly improves read/write performance, making Ceph an excellent option for high-performance applications.

How is Ceph storage beneficial for cloud hosting?

Ceph is a highly scalable, resilient, and performance-oriented storage system, making it an excellent choice for cloud hosting. With its fault tolerance, data replication, and block, object, and file storage support, Ceph can effectively handle the vast and diverse data needs of cloud-based services.

What role do metadata servers play in the Ceph file system?

Metadata servers, or Ceph MDS, manage the metadata for the Ceph filesystem. They handle file metadata such as file names, permissions, and directory structures, allowing the Ceph OSDs to concentrate on data management. This separation boosts performance, making the file system operations more efficient.

Is Ceph Storage a good fit for enterprise-level deployments?

Yes, Ceph is suitable for enterprise-level deployments. Its scalability, robustness, and versatility make it an ideal storage system for large businesses. With its features like thin provisioning, cache tiering, and scalable hashing with the CRUSH algorithm, Ceph can handle vast amounts of data and diverse workloads that large enterprises typically require.

Video covering Proxmox and Ceph configuration

Proxmox 8 Cluster with Ceph storage

Wrapping up

Ceph storage offers a robust and highly scalable storage solution for Proxmox clusters, making it an excellent option for anyone seeking an efficient way to manage extensive amounts of data and have a highly available storage location for workloads in the home lab or in production. By following this guide, you can implement a Ceph storage cluster in your Proxmox environment and leverage the numerous benefits of this powerful and flexible storage system.

Remember, the versatility of Ceph allows for many configurations tailored to meet specific needs. So, explore the various features of Ceph storage and find a solution that perfectly fits your data storage and management needs.


Brandon Lee

Brandon Lee is the Senior Writer, Engineer and owner at Virtualizationhowto.com and has over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies as well as in various industries, Brandon has extensive experience in various IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications, loves the outdoors and spending time with family.


3 Comments

  1. Very Interesting.
    I’m planning to create a Proxmox cluster with 3 nodes using mini PCs with 2×2.5 Gbps NICs, and I want to implement Ceph as shared storage. But I have read about possible issues with non-10 Gbps NICs. For a small environment, do you think a 10 Gbps interface is mandatory? I plan to run VMs for local DNS, 2x Windows 11 VMs, a Nextcloud instance, a VM dedicated to MariaDB, a torrent server, 2x HAproxy VMs, and a Graylog VM, plus some other Linux VMs for test purposes (testing Kubernetes, Docker Swarm, etc.)…

    Thanks in advance.

    1. Alessandro,

      Thank you for the comment! I don’t think there would be any issues with 10 Gbps connections, as this would be the preferred option. However, I will say there shouldn’t be any issues with 2.5 Gbps connections for a small Ceph environment. What type of mini PC are you using?

      Thanks Alessandro,
      Brandon
