As part of a course called Data Storage Technology and Networks at BITS Pilani – Hyderabad Campus, I took up a project to integrate a Ceph storage cluster with OpenStack. To integrate the two, we first need to deploy a Ceph storage cluster on more than one machine (we will use 2 machines for this purpose). This blog post gives you the exact steps to do that.
Before starting, let me tell you that deploying a Ceph cluster on 2 nodes is just for learning purposes. In a production environment, there are tens of machines, if not hundreds, serving as nodes of the storage cluster.
Below is the schematic diagram that gives a basic idea of what we are trying to achieve.
ceph1: This node will be the admin node and the monitor node, and will also serve as one of the Object Storage Devices (OSDs).
ceph2: This node will serve as an Object Storage Device.
Both ceph1 and ceph2 have the following configuration: OS: CentOS 7, RAM: ~2 GB, HDD: ~150 GB.
The Ceph version we are going to install is v9.2.1 (Infernalis).
Let’s get started.
Step 0: This step ensures everything goes smoothly from the network point of view.
In CentOS 7, the default configuration leaves the network interfaces down; that is, an interface does not get an IP address assigned when you boot the machine. Let's make the IP static and bring the interface up at boot.
Run the following command (on both ceph1 and ceph2) to find the name of the interface your machine uses for network traffic:
$ ifconfig -a
Let's say we got {iface} as the interface name. Now switch to the root user and do the following:
# cd /etc/sysconfig/network-scripts
Locate ifcfg-{iface} in the directory and run the following:
# nano ifcfg-{iface}
Locate attributes BOOTPROTO and ONBOOT. Edit them in the following way:
BOOTPROTO=static
ONBOOT=yes
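With BOOTPROTO=static you also need to provide the address details yourself. As a rough sketch, the finished ifcfg-{iface} on ceph1 might look like this (the IPADDR matches the address we use for ceph1 later in this post; the prefix, gateway, and DNS values are placeholders for my lab network, so substitute the ones for yours):
DEVICE={iface}
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.16.6.39
PREFIX=24
GATEWAY=172.16.6.1
DNS1=8.8.8.8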
Reboot the machines and ensure you get an IP address after rebooting.
Now we will set the hostname of each machine and tell each machine which IP address the other host is on. We will assign 'ceph1' as the hostname for ceph1 and 'ceph2' as the hostname for ceph2. Let us assume the IP address of ceph1 is 172.16.6.39 and that of ceph2 is 172.16.6.25.
On each Ceph node, log in as the root user and run the following command:
# nano /etc/sysconfig/network
Add the following line if not present (on ceph1):
HOSTNAME=ceph1
and on ceph2:
HOSTNAME=ceph2
Save changes and exit.
Now we will tell ceph1 which IP address ceph2 is on, and vice-versa. As the root user, run the following:
# nano /etc/hosts
Add the following lines to the file (on both ceph1 and ceph2):
172.16.6.39 ceph1
172.16.6.25 ceph2
Save the file and exit.
Now we will use the hostname program to change the hostname that is currently set. On ceph1:
# hostname ceph1
and on ceph2:
# hostname ceph2
Now run the following command on both machines to make the changes take effect:
# service network restart
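To confirm that the hostnames resolve correctly, you can ping each node from the other; shown here from ceph1 (the reverse check from ceph2 works the same way):
$ ping -c 3 ceph2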
Now for the final thing we need to take care of: we will disable and stop the firewall service on each of the nodes for smooth operation. I ran into errors when I did not stop the service. You can find more about the error in this mailing list.
Let’s disable the service:
# systemctl disable firewalld
# systemctl stop firewalld
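You can verify that the firewall is really stopped and will not come back after a reboot:
# systemctl status firewalld
The service should be reported as inactive (dead) and disabled.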
That’s it. Now we can move to ceph deployment.
Step 1: Add the repositories needed for ceph-deploy on the admin node. Run the following command:
$ sudo yum install -y yum-utils && \
  sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
  sudo yum install --nogpgcheck -y epel-release && \
  sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
  sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
You might run into errors regarding the GPG key. A solution is mentioned in this blog post.
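To double-check that the EPEL repository was added successfully, a quick look at the repo list should show it:
$ sudo yum repolist | grep -i epel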
Step 2: Now we are going to add the Ceph yum repo. We will use the file /etc/yum.repos.d/ceph-deploy.repo.
$ sudo nano /etc/yum.repos.d/ceph-deploy.repo
Add the following lines to the file:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-infernalis/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
Step 3: Update the repositories and install the ceph-deploy package.
$ sudo yum update && sudo yum install ceph-deploy
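A quick sanity check that ceph-deploy landed on the admin node:
$ ceph-deploy --version
(If your build does not recognize --version, ceph-deploy --help serves the same purpose.)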
Step 4: The admin node must have password-less SSH access to the Ceph nodes. When ceph-deploy logs in to a Ceph node as a user, that particular user must have password-less sudo privileges.
Let's install NTP on both ceph1 and ceph2 (Ceph monitors complain about clock skew, so keeping the clocks in sync matters):
$ sudo yum install ntp ntpdate ntp-doc
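Installing the packages does not necessarily start the time service, so enable and start ntpd on both nodes:
$ sudo systemctl enable ntpd
$ sudo systemctl start ntpd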
Now we will make sure SSH is installed on both ceph1 and ceph2:
$ sudo yum install openssh-server
Ensure SSH is running on the nodes:
$ ps aux | grep sshd
To make SSHing into the Ceph nodes easier, we will edit the SSH config file on the admin node (ceph1):
$ nano ~/.ssh/config
Add the following lines if not present:
Host ceph2
    Hostname ceph2
    User ceph
Step 5: This is an important step. We will now create a new user on each Ceph node.
The ceph-deploy utility must log in to a Ceph node as a user that has password-less sudo privileges, because it needs to install software and configuration files without prompting for passwords.
Let's create a new user on both ceph1 and ceph2. Run the following commands on each of them.
$ sudo useradd -d /home/ceph -m ceph
$ sudo passwd ceph
Add the new user to the sudoers list:
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph $ sudo chmod 0440 /etc/sudoers.d/ceph
Now we will enable password-less SSH. Since ceph-deploy will not prompt for a password, we must generate SSH keys on the admin node (ceph1) and distribute the public key to each Ceph node (here, ceph2).
Run the following command on ceph1 to generate SSH keys:
$ ssh-keygen
When asked for a passphrase, leave it blank.
Now we will copy the key to ceph2 by running the following command:
$ ssh-copy-id ceph@ceph2
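To verify that password-less SSH works, log in from ceph1 (as the same user whose key you just copied); you should get a shell on ceph2 as the ceph user without being asked for a password:
$ ssh ceph2
$ exit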
Step 6: Now we are going to install the yum priorities plugin. This plugin makes yum respect the priority assigned to each repository, so that Ceph packages are pulled from the Ceph repository rather than being overridden by packages from other repositories.
Run the following command:
$ sudo yum install yum-plugin-priorities
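If you want to confirm the plugin is enabled, check its configuration file; on a default CentOS 7 setup it lives at the path below (the path may differ if your yum configuration has been customized) and should contain enabled = 1:
$ cat /etc/yum/pluginconf.d/priorities.conf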
Great! You just completed the first and the most important half of this tutorial. You can take a break now if you want to. 🙂
Now moving to the second half.
Step 7: On the admin node, i.e. ceph1, do the following:
$ cd /home/ceph/
$ mkdir my_cluster
$ cd my_cluster
Disable requiretty, as you might otherwise encounter errors while deploying with ceph-deploy. To do that, run:
$ sudo sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
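You can confirm that the line was commented out:
$ sudo grep requiretty /etc/sudoers
The output should show #Defaults requiretty (or nothing at all if your sudoers never contained the line).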
Step 8: Let’s create a cluster now.
Ensure you are on ceph1 and in the /home/ceph/my_cluster directory. Now execute the following command:
$ ceph-deploy new ceph1
This command will create 3 files. One of them is ceph.conf. Let's edit that file to tell Ceph that we have a setup with only 2 OSDs, so it should keep 2 replicas of each object by default.
$ nano ceph.conf
Under the [global] section, append:
osd pool default size = 2
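For reference, after the edit the [global] section of ceph.conf should look roughly like the sketch below. The fsid is a random UUID generated by ceph-deploy and the auth lines can vary slightly between versions, so treat this only as a guide; the last line is the one we added:
[global]
fsid = 9cb53496-a559-401a-a16f-cc3a3df8c1c4
mon_initial_members = ceph1
mon_host = 172.16.6.39
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2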
Step 9: Let's install Ceph on all the nodes:
$ ceph-deploy install ceph1 ceph2
Let's add the initial monitor and gather the keys:
$ ceph-deploy mon create-initial
This will generate the files mentioned below:
ceph.client.admin.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
Step 10: Now we will add 2 OSDs. In a production environment, whole disks are typically dedicated to OSDs. For our basic setup, we will use directories rather than whole disks.
On ceph1, execute the following commands:
$ cd /home/ceph
$ mkdir osd0
On ceph2, execute the following commands (you can also do this from ceph1 over SSH, as shown below):
$ cd /home/ceph
$ mkdir osd1
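If you prefer to stay on ceph1, the same directory can be created remotely, relying on the password-less SSH we set up earlier:
$ ssh ceph2 mkdir /home/ceph/osd1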
Let's prepare the OSDs by running the following commands on ceph1:
$ cd /home/ceph/my_cluster
$ ceph-deploy osd prepare ceph1:/home/ceph/osd0 ceph2:/home/ceph/osd1
Now let's activate the prepared OSDs. Run the following command on ceph1:
$ ceph-deploy osd activate ceph1:/home/ceph/osd0 ceph2:/home/ceph/osd1
Please note that preparing and activating may fail if the firewall service is not disabled. If you encounter an error similar to the one mentioned below, try disabling and stopping the firewall service and then try again.
[ceph1][WARNIN] No data was received after 300 seconds, disconnecting...
After successfully activating the OSDs, reboot the machines. It is very important that you reboot the machines.
Step 11: After rebooting, let's check whether the OSDs are running. Run the following command on ceph1:
$ ceph osd tree
Typical output of the above command will list osd.0 and osd.1. If it does not, check again whether the firewall service is running; if it is, disable and stop it, and activate the OSDs again (don't prepare the OSDs again; they need to be prepared only once).
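For reference, a healthy two-OSD tree looks roughly like the following; the IDs and weights here are only illustrative and will differ on your machines:
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.27000 root default
-2 0.13500     host ceph1
 0 0.13500         osd.0       up  1.00000          1.00000
-3 0.13500     host ceph2
 1 0.13500         osd.1       up  1.00000          1.00000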
If you see all the OSDs in the output, then that means we have deployed the cluster successfully!
Step 12: Now we will use ceph-deploy to copy the configuration file and admin key to our admin node and our Ceph Nodes so that we can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time we execute a command.
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
$ ceph-deploy admin ceph1 ceph2
The first command ensures we have read permission on ceph.client.admin.keyring.
After this step, make sure the machines are rebooted.
Step 13: This is the final step. Now we will check the health of our cluster by running the following command on ceph1:
$ ceph health
The desired output is:
HEALTH_OK
Now we will see the status of our cluster. Run the following command on ceph1:
$ ceph status
The output should be similar to what I got on my machine, shown below:
cluster 9cb53496-a559-401a-a16f-cc3a3df8c1c4
 health HEALTH_OK
 monmap e1: 1 mons at {ceph1=172.16.6.39:6789/0}
        election epoch 1, quorum 0 ceph1
 osdmap e12: 2 osds: 2 up, 2 in
        flags sortbitwise
  pgmap v1415: 64 pgs, 1 pools, 0 bytes data, 0 objects
        28708 MB used, 115 GB / 143 GB avail
              64 active+clean
The output should show the placement groups in the active+clean state.
If you have reached here that means you have completed the tutorial. Bravo!
A few tips:
The errors I encountered during installation and deployment were mostly caused by the ceph user lacking permission to access and/or modify files, so make sure the user has sufficient permissions.
If you are unsure what permissions to give, run the following commands (you must have root privileges):
$ sudo chmod 664 /path/to/file
$ sudo chmod -R 775 /path/to/directory
(Directories need the execute bit so they can be entered, hence 775 rather than 664 for directories.)
Note that this setup is for test environments only, like a lab, not for production. Do ask your supervisor/boss/anybody-above-you before making significant changes.
Hope I helped. Have fun. Happy hacking!