Ceph Nautilus Lab
Summary⌗
This is the starting point for future Ceph labs or tests. It is designed with a mixture of drive sizes to allow for different labs and scenarios.
It should take around an hour to build from scratch using the quick setup scripts.
Setup VMs⌗
There will be 13 base VMs (plus up to 6 optional nodes) and 2 networks set up. The Detailed Setup shows the full setup for 1 OSD node; using the Quick Setup script will create the environment from scratch.
Base System Requirements (Does not include optional nodes)
- CPU >= 44
- Memory >= 82GB
- Disk >= 580GB
Host | Role | Count | vCPU | Memory | Disk Size | OSD Disks | OSD Disk Size | Optional |
---|---|---|---|---|---|---|---|---|
bastion | bastion | 1 | 2 | 2GB | 40GB | 0 | - | No |
grafana | grafana | 1 | 2 | 4GB | 40GB | 0 | - | No |
monitor | mon/mgr | 3 | 4 | 4GB | 40GB | 0 | - | No |
t1-osd | osd | 4 | 4 | 8GB | 40GB | 4 | 5GB | No |
t2-osd | osd | 4 | 4 | 8GB | 40GB | 4 | 10GB | No |
rgw | rgw | 2 | 2 | 4GB | 40GB | 0 | - | Yes |
mds | mds | 2 | 4 | 8GB | 40GB | 0 | - | Yes |
iscsi | iscsi | 2 | 4 | 8GB | 40GB | 0 | - | Yes |
Detailed Setup⌗
Create Networks⌗
- Ceph presentation network
- Ceph replication network
- Create these in libvirt
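A minimal sketch of defining both networks with `virsh` follows; the network names, bridge devices, NAT forwarding and DHCP range are assumptions, but the subnets match the addressing used later in this lab.

```bash
# Sketch only: names, bridges, NAT and DHCP range are assumptions.
cat > ceph-presentation.xml <<'EOF'
<network>
  <name>ceph-presentation</name>
  <forward mode='nat'/>
  <bridge name='virbr20'/>
  <ip address='10.44.20.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.44.20.200' end='10.44.20.250'/>
    </dhcp>
  </ip>
</network>
EOF

cat > ceph-replication.xml <<'EOF'
<network>
  <name>ceph-replication</name>
  <bridge name='virbr21'/>
  <ip address='172.16.20.1' netmask='255.255.255.0'/>
</network>
EOF

# Define, start and autostart both networks
for net in ceph-presentation ceph-replication; do
  virsh net-define ${net}.xml
  virsh net-start ${net}
  virsh net-autostart ${net}
done
```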
Create VM Example⌗
This will create an OSD node; other node types won't need as many drives created. A hedged sketch of the full sequence follows the list below.
- Create the OS drive for the node
- Expand the OS base image into the drive. For this setup it will be using CentOS 7
- Customise the OS so it can be used
- Create the 4 OSD drives (5GB for t1 nodes)
- Define the VM with both networks and all drives attached. Remove `--dry-run` and `--print-xml` in order to create the domain.
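The sketch below walks through those steps for one t2 OSD node using libvirt/libguestfs tooling. The image paths, base image name, credentials and sizes are assumptions based on the table above; adjust them for your host.

```bash
# Sketch of building one t2 OSD node (paths, base image and sizes are assumptions).
NODE=ceph-t2-osd01

# Create the OS drive for the node
qemu-img create -f qcow2 /var/lib/libvirt/images/${NODE}.qcow2 40G

# Expand the CentOS 7 base image into the new drive
virt-resize --expand /dev/sda1 CentOS-7-x86_64-GenericCloud.qcow2 \
  /var/lib/libvirt/images/${NODE}.qcow2

# Customise the OS so it can be used (hostname, root password, ssh key)
virt-customize -a /var/lib/libvirt/images/${NODE}.qcow2 \
  --hostname ${NODE}.ceph.lab \
  --root-password password:changeme \
  --ssh-inject root:file:/root/.ssh/id_rsa.pub \
  --selinux-relabel

# Create the 4 OSD drives (10GB here; use 5GB for t1 nodes)
for disk in b c d e; do
  qemu-img create -f qcow2 /var/lib/libvirt/images/${NODE}-vd${disk}.qcow2 10G
done

# Define the VM with both networks and all drives attached.
# Remove --dry-run and --print-xml to actually create the domain.
virt-install --name ${NODE} --vcpus 4 --memory 8192 \
  --disk /var/lib/libvirt/images/${NODE}.qcow2 \
  --disk /var/lib/libvirt/images/${NODE}-vdb.qcow2 \
  --disk /var/lib/libvirt/images/${NODE}-vdc.qcow2 \
  --disk /var/lib/libvirt/images/${NODE}-vdd.qcow2 \
  --disk /var/lib/libvirt/images/${NODE}-vde.qcow2 \
  --network network=ceph-presentation --network network=ceph-replication \
  --import --os-variant centos7.0 --noautoconsole \
  --dry-run --print-xml
```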
Quick Setup⌗
Script options are set as variables. By default it won't build any of the optional nodes; if the vars are set to `yes`, they will be built. The script is somewhat idempotent as well. A teardown script is also available to clean this all up.
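As a purely hypothetical invocation (the variable and script names here are placeholders, not the real ones from the repo), enabling the optional nodes might look something like:

```bash
# Placeholder names only; check the actual script in the repo for the real variables.
BUILD_RGW=yes BUILD_MDS=yes BUILD_ISCSI=yes ./build-lab.sh
```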
The nodes built by the script (including optional)
Hostname | Public IP | Replication IP |
---|---|---|
bastion.ceph.lab | DHCP | None |
grafana.ceph.lab | DHCP | None |
ceph-mon01.ceph.lab | 10.44.20.21 | 172.16.20.21 |
ceph-mon02.ceph.lab | 10.44.20.22 | 172.16.20.22 |
ceph-mon03.ceph.lab | 10.44.20.23 | 172.16.20.23 |
ceph-t1-osd01.ceph.lab | 10.44.20.31 | 172.16.20.31 |
ceph-t1-osd02.ceph.lab | 10.44.20.32 | 172.16.20.32 |
ceph-t1-osd03.ceph.lab | 10.44.20.33 | 172.16.20.33 |
ceph-t1-osd04.ceph.lab | 10.44.20.34 | 172.16.20.34 |
ceph-t2-osd01.ceph.lab | 10.44.20.41 | 172.16.20.41 |
ceph-t2-osd02.ceph.lab | 10.44.20.42 | 172.16.20.42 |
ceph-t2-osd03.ceph.lab | 10.44.20.43 | 172.16.20.43 |
ceph-t2-osd04.ceph.lab | 10.44.20.44 | 172.16.20.44 |
ceph-rgw01.ceph.lab | 10.44.20.111 | None |
ceph-rgw02.ceph.lab | 10.44.20.112 | None |
ceph-mds01.ceph.lab | 10.44.20.121 | None |
ceph-mds02.ceph.lab | 10.44.20.122 | None |
ceph-iscsi01.ceph.lab | 10.44.20.131 | None |
ceph-iscsi02.ceph.lab | 10.44.20.132 | None |
Scripts can also be found on GitHub
Demo⌗
Adding More Disks⌗
If there is a capacity need, or a need to add some more drives to the OSD nodes, this example will add more drives to the OSD VMs. The brace expansion `{f..g}` will add 2 more drives, `/dev/vdf` and `/dev/vdg`; change this range to add more.
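A minimal sketch of that loop, assuming the image paths and VM naming from the earlier example:

```bash
# Add two extra 10GB drives (/dev/vdf and /dev/vdg) to each t2 OSD VM.
# Widen the {f..g} range to add more.
for node in ceph-t2-osd0{1..4}; do
  for disk in {f..g}; do
    qemu-img create -f qcow2 /var/lib/libvirt/images/${node}-vd${disk}.qcow2 10G
    virsh attach-disk ${node} \
      /var/lib/libvirt/images/${node}-vd${disk}.qcow2 vd${disk} \
      --driver qemu --subdriver qcow2 --persistent
  done
done
```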
Cleanup⌗
Cleanup bash script to remove all the parts of the Ceph lab
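The full script lives in the repo; a minimal sketch of the idea, assuming the VM and network names used above:

```bash
# Destroy every lab VM and delete its storage volumes (name pattern is an assumption).
for vm in $(virsh list --all --name | grep -E '^(bastion|grafana|ceph-)'); do
  virsh destroy ${vm} 2>/dev/null || true
  virsh undefine ${vm} --remove-all-storage
done

# Remove the lab networks
for net in ceph-presentation ceph-replication; do
  virsh net-destroy ${net}
  virsh net-undefine ${net}
done
```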
Scripts can also be found on GitHub
Ceph Install⌗
Requirements⌗
This guide will use `ceph-deploy`, so it requires these steps before starting:
- Chrony or NTP
- LVM2
Example Ansible inventory file for confirming and setting up requirements.
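A hypothetical inventory sketch along those lines (the group names and the choice of INI format are assumptions):

```ini
# Hypothetical group layout for the lab nodes
[mons]
ceph-mon0[1:3].ceph.lab

[osds]
ceph-t1-osd0[1:4].ceph.lab
ceph-t2-osd0[1:4].ceph.lab

[rgws]
ceph-rgw0[1:2].ceph.lab

[bastion]
bastion.ceph.lab

[ceph:children]
mons
osds
rgws
```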
- Enable and start chronyd service
- Confirm chrony is working
- Install podman
- Ensure python3 is installed
Requirements Playbook⌗
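The playbook itself is not reproduced here; a hedged sketch covering the points above (module choices and the `ceph` group name are assumptions) might look like:

```yaml
# Requirements sketch for the CentOS 7 lab nodes
- hosts: ceph
  become: true
  tasks:
    - name: Install chrony, lvm2, podman and python3
      yum:
        name: [chrony, lvm2, podman, python3]
        state: present

    - name: Enable and start the chronyd service
      service:
        name: chronyd
        state: started
        enabled: true

    - name: Confirm chrony is tracking a time source
      command: chronyc tracking
      changed_when: false
```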
Ceph-Deploy⌗
The following steps are run from the `bastion.ceph.lab` node.
This section is described in detail on the Ceph Docs Site
Monitors and Managers⌗
- Make new ceph directory on the bastion node
- Change to the new ceph directory
- Create the cluster
- Add the below lines to `ceph.conf`
- Install Ceph packages on the monitors
- Deploy the initial monitors
- Deploy the admin keys to the monitors
- Add the MGR daemon to the first 2 monitors
- Ensure the `ceph-common` package is installed on the bastion and copy all config to `/etc/ceph/`
- Check `ceph` commands work from the bastion node
- Example output
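The individual commands are not reproduced in the list above; a hedged sketch of the whole sequence, assuming the hostnames from the Quick Setup table and the lab subnets for the `ceph.conf` additions:

```bash
# Make a new ceph directory on the bastion node and change into it
mkdir ~/ceph-cluster && cd ~/ceph-cluster

# Create the cluster with the three monitors
ceph-deploy new ceph-mon01 ceph-mon02 ceph-mon03

# Add the network settings to ceph.conf (assumed values)
cat >> ceph.conf <<'EOF'
public network = 10.44.20.0/24
cluster network = 172.16.20.0/24
EOF

# Install Ceph packages on the monitors
ceph-deploy install --release nautilus ceph-mon01 ceph-mon02 ceph-mon03

# Deploy the initial monitors and gather keys
ceph-deploy mon create-initial

# Deploy the admin keys to the monitors
ceph-deploy admin ceph-mon01 ceph-mon02 ceph-mon03

# Add the MGR daemon to the first 2 monitors
ceph-deploy mgr create ceph-mon01 ceph-mon02

# Ensure ceph-common is installed on the bastion and copy the config locally
sudo yum install -y ceph-common
sudo cp ceph.conf ceph.client.admin.keyring /etc/ceph/

# Check ceph commands work from the bastion node
ceph -s
```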
OSDs⌗
- Install ceph packages on all the OSD nodes
- Create OSDs on the available drives. In this example `/dev/vdb` -> `/dev/vde` are available for OSDs
- Once this has completed, check that all OSDs have been registered in the cluster
- Example output
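A sketch of that loop, assuming the node names and device letters from the lab layout above:

```bash
OSD_NODES="ceph-t1-osd01 ceph-t1-osd02 ceph-t1-osd03 ceph-t1-osd04 \
           ceph-t2-osd01 ceph-t2-osd02 ceph-t2-osd03 ceph-t2-osd04"

# Install ceph packages on all the OSD nodes
ceph-deploy install --release nautilus ${OSD_NODES}

# Create OSDs on /dev/vdb -> /dev/vde on every node
for node in ${OSD_NODES}; do
  for dev in vdb vdc vdd vde; do
    ceph-deploy osd create --data /dev/${dev} ${node}
  done
done

# Check that all OSDs have been registered in the cluster
ceph osd tree
```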
Add Rados Gateway⌗
For other daemon types, the Ceph docs detail how to configure them.
- Install Ceph on the RGW nodes
- Add the below lines to `ceph.conf` to deploy RGWs with Beast as the front end
- Deploy the RadosGW daemons
- Now there should be 4 new pools created for the RGW objects
- Example output
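A hedged sketch of the RGW deployment; the client section names assume the instance naming that `ceph-deploy rgw create` uses (`rgw.<hostname>`), and the Beast port is an assumption:

```bash
# Install Ceph (including the radosgw package) on the RGW nodes
ceph-deploy install --release nautilus --rgw ceph-rgw01 ceph-rgw02

# Add the Beast frontend settings to ceph.conf and push the config out
cat >> ceph.conf <<'EOF'
[client.rgw.ceph-rgw01]
rgw frontends = beast port=80

[client.rgw.ceph-rgw02]
rgw frontends = beast port=80
EOF
ceph-deploy --overwrite-conf config push ceph-rgw01 ceph-rgw02

# Deploy the RadosGW daemons
ceph-deploy rgw create ceph-rgw01 ceph-rgw02

# The new RGW pools should now be visible
ceph osd lspools
```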
Ceph Dashboard⌗
These steps outline how to enable the ceph dashboard
- Install the Ceph Dashboard package on the monitor nodes
- Enable the Dashboard module
- Disable SSL on the Dashboard
- Setup the admin user and password for the dashboard
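A minimal sketch of those steps, assuming a recent Nautilus point release; the username and password are placeholders:

```bash
# On each monitor node: install the dashboard module package
sudo yum install -y ceph-mgr-dashboard

# Enable the module and disable SSL for this lab
ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/ssl false

# Create the dashboard admin user (newer Nautilus releases read the password from a file)
echo 'SuperSecret123' > dashboard-password.txt
ceph dashboard ac-user-create admin -i dashboard-password.txt administrator
```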
Access to the dashboard should now be available at http://10.44.20.21:8080 using the above credentials.
Add RGW Management to Dashboard⌗
- Create an admin RGW user
- Grab the access and secret key for this user
- Configure the dashboard to use these keys
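A hedged sketch of those three steps, assuming a recent Nautilus release where the keys are supplied via a file; the user name and key placeholders are not real values:

```bash
# Create an admin RGW user
radosgw-admin user create --uid=dashboard --display-name="Dashboard" --system

# Grab the access and secret key for this user
radosgw-admin user info --uid=dashboard | grep -E 'access_key|secret_key'

# Configure the dashboard to use these keys (substitute the real values)
echo '<ACCESS_KEY>' > rgw-access-key.txt
echo '<SECRET_KEY>' > rgw-secret-key.txt
ceph dashboard set-rgw-api-access-key -i rgw-access-key.txt
ceph dashboard set-rgw-api-secret-key -i rgw-secret-key.txt
```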
Add a User to Rados Gateway⌗
Object Gateway users can be created either via the dashboard (assuming it has been configured) or via the CLI.
- To add a user to the Object Gateway
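For example (the uid and display name are placeholders):

```bash
radosgw-admin user create --uid=testuser --display-name="Test User"
```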
Conclusion⌗
At this point there should be a running Ceph cluster at release Nautilus, optionally with 2 Rados Gateway nodes and the associated pools.
Cleanup⌗
There is a `teardown.sh` script in the GitHub repo which will remove all the VMs and their storage volumes.