# Ceph RBD

## General

To connect to Ceph RBD, you need to provide the keyring and configuration files. The Ceph RBD storage provider should detect the volumes and pools in the environment and allow you to assign backup policies. Storware Backup & Recovery uses the RBD-NBD approach: a remote RBD snapshot is mounted over NBD and its data is read from the resulting block device.
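The RBD-NBD flow can be sketched as follows; the pool, image, and snapshot names (`volumes`, `volume-1`, `backup-snap`) are hypothetical placeholders, and the commands require access to a Ceph cluster:

```shell
# Hypothetical names: pool "volumes", image "volume-1", snapshot "backup-snap".
# Map the remote RBD snapshot read-only over NBD; rbd-nbd prints the device it attached.
DEV=$(sudo rbd-nbd map --read-only volumes/volume-1@backup-snap)

# Read the snapshot data from the NBD block device, e.g. into a raw backup file.
sudo dd if="$DEV" of=/backup/volume-1.raw bs=4M status=progress

# Detach the device when finished.
sudo rbd-nbd unmap "$DEV"
```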

{% hint style="info" %}
**Note:**

* Storware Backup & Recovery needs access to the monitors specified in the Ceph configuration file.
* When creating a Ceph RBD storage provider for an OpenStack environment, only the credentials specified in the storage provider form are used by the OpenStack backup process - the actual technique (RBD-NBD mount or Cinder in the disk-attachment strategy) and the **node** used for connecting to and backing up the volumes depend on the OpenStack hypervisor manager settings, not on the storage provider settings.
{% endhint %}

## Supported features

Requires Red Hat Ceph Storage 4.0 or newer, or Ceph v14.2.0 (Nautilus) or newer.

| Full backup                       | Supported                 |
| --------------------------------- | ------------------------- |
| Incremental backup                | Supported (RBD snap-diff) |
| Restore                           | Supported                 |
| Single item restore               | Supported                 |
| Access to files backup over iSCSI | Supported                 |
| Name-based policy assignment      | Supported                 |
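Incremental backup relies on RBD's snapshot-diff mechanism. A minimal sketch of the underlying idea, using hypothetical pool, image, and snapshot names (`volumes/volume-1`, `snap1`, `snap2`) and requiring access to a Ceph cluster:

```shell
# Hypothetical names - adjust to your environment.
# Create two point-in-time snapshots of an image.
sudo rbd snap create volumes/volume-1@snap1
sudo rbd snap create volumes/volume-1@snap2

# Export only the blocks that changed between snap1 and snap2.
sudo rbd export-diff --from-snap snap1 volumes/volume-1@snap2 /backup/volume-1.snap1-snap2.diff

# The diff can later be replayed onto a restored image with:
#   rbd import-diff /backup/volume-1.snap1-snap2.diff volumes/volume-1-restored
```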

## Example

Complete the following steps to add the Ceph RBD storage provider:

* The Storware Backup & Recovery Node supports Ceph RBD; for this, you will need to install the Ceph libraries:
  * On the Storware Backup & Recovery **Node**, enable the required repositories:

{% hint style="info" %}
Remember to select the version of the Ceph repository that is compatible with the version of the cluster in your environment.
{% endhint %}

For Storware Backup & Recovery node installed on RHEL 7:

```
sudo subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
```

For Storware Backup & Recovery node installed on RHEL 8:

```
sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
```

For Storware Backup & Recovery node installed on CentOS 7:

```
sudo yum install epel-release
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
sudo yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm
```

For Storware Backup & Recovery node installed on CentOS Stream 8:

```
sudo yum install epel-release
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
sudo yum install https://download.ceph.com/rpm-octopus/el8/noarch/ceph-release-1-1.el8.noarch.rpm
```

For Storware Backup & Recovery node installed on CentOS Stream 9:

```
sudo yum install epel-release
```

Add the Ceph repository:

```
vi /etc/yum.repos.d/ceph.repo
```

```
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-reef/el9/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```

* Install the rbd-nbd and ceph-common package, with all dependencies:

  ```
  yum install rbd-nbd ceph-common
  ```
* Go to `Storage` -> `Infrastructure` and click the `Create` button
* Choose `Ceph RBD` as the type and select the node configuration responsible for backup operations
* Click the `Upload keyring file` button and select the `Ceph keyring file`, which can be obtained from the Cinder host - for example, in `/etc/ceph/ceph.client.admin.keyring`
* Provide `Ceph configuration file content`, for example:

  ```
    [global]
    cluster network = 10.40.0.0/16
    fsid = cc3a4e9f-d2ca-4fec-805d-2c40605723b3
    mon host = ceph-mon.domain.local
    mon initial members = ceph-00
    osd pool default crush rule = -1
    public network = 10.40.0.0/16
    [client.images]
    keyring = /etc/ceph/ceph.client.images.keyring
    [client.volumes]
    keyring = /etc/ceph/ceph.client.volumes.keyring
    [client.nova]
    keyring = /etc/ceph/ceph.client.nova.keyring
  ```

{% hint style="info" %}
**Note:** Remember, the content above needs to end with a newline character.
{% endhint %}

* If you want to index only the Ceph pools of your choice, change the `Storage pool management strategy` to `INCLUDE` and add the storage pool names.
* Click `Save` - now you can initiate inventory synchronization (pop-up message) to collect information about available volumes and pools
  * later, you can use the `Inventory Synchronization` button to the right of the newly created provider on the list.
* Your volumes will appear in the `Instances` section in the submenu on the left, from which you can initiate backup/restore/mount tasks or view a volume's backup history and details.
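After completing the steps above, you can verify connectivity from the node manually; the configuration path, client name, and pool name below (`/etc/ceph/ceph.conf`, `admin`, `volumes`) are examples - use the ones matching the files you uploaded, and note these commands require a reachable Ceph cluster:

```shell
# Example paths and client name - adjust to your environment.
# Check overall cluster reachability and health from the node.
sudo ceph -s --conf /etc/ceph/ceph.conf --id admin

# List RBD images in a pool the provider should see (pool name is hypothetical).
sudo rbd ls volumes --conf /etc/ceph/ceph.conf --id admin
```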
