OpenStack
Storware Backup & Recovery supports backup for OpenStack:
Disk attachment through Cinder with generic incremental (preferred):
supports all hypervisors and storages
supports incremental backup
proxy VM is required - used for the disk attachment process.
Disk image transfer - for KVM hypervisors with VMs using QCOW2 volumes or Ceph-based storage:
supports incremental backup
disk images are transferred directly from API (no Proxy VM required)
Disk attachment through Cinder:
supports all hypervisors and storages
no incremental backup
proxy VM is required - used for the disk attachment process.
Backup Strategies
Libvirt strategy
Storware Backup & Recovery supports OpenStack environments that use KVM hypervisors and VMs running on QCOW2 or RAW files. Storware Backup & Recovery communicates with OpenStack APIs such as Nova and Glance to collect metadata and to import images during the restore process. However, the actual backup is done over SSH directly from the hypervisor. The Storware Backup & Recovery Node can be installed anywhere - it just needs network access to the OpenStack APIs and SSH access to the hypervisors. Both full and incremental backups are supported.

Backup Process
direct access to the hypervisor over SSH
crash-consistent snapshot taken directly using virsh (QCOW2/RAW file), rbd snapshot for Ceph (separate call for each storage backend)
optional application consistency using pre/post snapshot command execution
QCOW2/RAW-file data exported over SSH (optionally with netcat)
Ceph RBD data exported using rbd export or RBD-NBD when incremental is used
metadata exported from OpenStack APIs (nova, glance, cinder)
the last snapshot kept on the hypervisor for the next incremental backup (if at least one schedule assigned to the VM has backup type set to incremental)
restore recreates files/volumes according to their backend (same transfer mechanism as used in backup) and then defines VM on the hypervisor
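For orientation, the snapshot and export steps above map onto ordinary hypervisor-side tooling. The following is a rough, hand-run illustration only - the domain, pool, and snapshot names are placeholders, and Storware Backup & Recovery issues the equivalent calls itself over SSH:
# crash-consistent, disk-only snapshot of a QCOW2-backed VM
virsh snapshot-create-as instance-00000a1b backup-snap --disk-only --atomic
# Ceph-backed disk: snapshot, then a full or incremental export
rbd snap create volumes/volume-1234@backup-snap
rbd export-diff --from-snap previous-snap volumes/volume-1234@backup-snap volume-1234.diff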
Disk attachment
Storware Backup & Recovery also supports the disk-attachment method using Cinder. This allows you to use any Cinder-compatible storage and still have Storware Backup & Recovery create backups. Incremental backup is supported in disk attachment with generic change block tracking (which has a higher CPU overhead). Storware Backup & Recovery needs to communicate with the OpenStack services' APIs to attach drives to the proxy VM with the Storware Backup & Recovery Node installed.

Backup Process
crash-consistent snapshot using cinder API
optional application consistency using pre/post snapshot command execution
metadata exported from API
volumes created from snapshotted disks are mounted one by one to the Proxy VM
data read directly on the Proxy VM
incremental backups supported for Ceph RBD - a list of the changed blocks is fetched from the monitors, and only these blocks are read from the attached disk on the Proxy VM
if an instance was created from a Glance image and the "Download image from Glance" option is enabled, the image data is downloaded from the Glance API; on restore, the instance is created from the instance metadata and the images fetched from the Glance API
restore creates empty disks on the Proxy VM, imports the merged data, and then recreates the VM using these volumes; it will try to use the image from Glance if it is present in the target environment, or it will upload the image to Glance and register it with the restored VM
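The attach flow corresponds roughly to the following OpenStack CLI calls (volume and server names are placeholders; Storware Backup & Recovery performs the equivalent API requests directly):
# crash-consistent Cinder snapshot of the source volume
openstack volume snapshot create --volume source-volume sbr-snap
# temporary volume created from that snapshot
openstack volume create --snapshot sbr-snap sbr-temp-volume
# attach the temporary volume to the Proxy VM for reading
openstack server add volume proxy-vm sbr-temp-volume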
Ceph RBD storage backend
Storware Backup & Recovery also supports deployments with Ceph RBD as a storage backend. Storware Backup & Recovery communicates directly with Ceph monitors using RBD export/RBD-NBD when used with the Libvirt strategy or - when used with the Disk-attachment method - only during incremental backups (snapshot difference).
(Diagram: Libvirt strategy)
(Diagram: Disk attachment strategy)
Storware Backup & Recovery supports OpenStack with Ceph RBD volumes. Here is an example of a typical (expected) section that needs to be added to cinder.conf for Ceph in the OpenStack environment:
[rbd]
volume_backend_name = rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = volumes
rbd_secret_uuid = ce6d1549-4d63-476b-afb6-88f0b196414f
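Note that the backend also has to be listed under enabled_backends in the [DEFAULT] section of cinder.conf and is usually exposed through a volume type; a minimal sketch (the type name rbd is only an example):
# cinder.conf, [DEFAULT] section
enabled_backends = rbd
# expose the backend through a volume type
openstack volume type create rbd
openstack volume type set --property volume_backend_name=rbd rbd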
A good article on how to set up Ceph with OpenStack can be found here.
To set up the OpenStack HVM with Ceph RBD volumes in Storware Backup & Recovery:
Add Ceph storage as described here
Add the hypervisor manager as described here.
Go to Virtual Environments -> Infrastructure -> Clusters and select the cluster that is used by OpenStack.

In the Storage Provider field, select the previously added Ceph storage.

Now you can save and sync the inventory - if Ceph communication works properly, you should see Hypervisor Storage entries (in the Hypervisors -> Storage tab) representing your Ceph storage pools.
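If the pools do not appear after synchronization, you can verify Ceph connectivity manually (assuming the cluster's ceph.conf and a keyring are available where you run the check):
# is the cluster reachable?
ceph -s --conf /etc/ceph/ceph.conf
# list images in the volumes pool
rbd ls volumes --conf /etc/ceph/ceph.conf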
Supported features
Supported backup strategies: Disk attachment (preferred), Disk image transfer, and Disk attachment with generic incremental (preferred if Ceph monitors are not accessible)
Supported versions
All strategies: Queens, Rocky, Stein, Train, Ussuri, Victoria, Wallaby, Xena, Yoga, Zed, Antelope, Bobcat, Caracal, Dalmatian, Epoxy

| Feature | Disk attachment | Disk attachment with generic incremental | Disk image transfer |
| --- | --- | --- | --- |
| The last snapshot is kept on the hypervisor for incremental backups | Yes | No | Yes |
| Access to hypervisor OS required | No | No | Yes |
| Proxy VM required | Yes | Yes | No |
| Full backup | Supported | Supported | Supported |
| Incremental backup | Supported * | Supported | Supported * |
| Restore | Supported | Supported | Supported |
| File-level restore | Supported | Supported | Supported |
| VM disk exclusion | Supported | Supported | Supported |
| Quiesced snapshots | Not supported | Not supported | Not supported |
| Snapshots management | Supported ** | Supported ** | Not supported |
| Pre/post command execution | Supported | Supported | Supported |
| Access to VM disk backup over iSCSI | Supported | Supported | Supported *** |
| VM name-based policy assignment | Supported | Supported | Supported |
| VM tag-based policy assignment | Supported | Supported | Supported |
| Power-on VM after restore | Not supported (always on) | Not supported (always on) | Not supported (always on) |

* Ceph RBD volumes only
** Without snapshot revert
*** Ceph RBD/RAW disks only
Network requirements
Disk attachment
Connection URL: https://KEYSTONE_HOST:5000/v3

| From | To | Ports | Description |
| --- | --- | --- | --- |
| Node | Keystone, Nova, Glance, Cinder | ports that were defined in endpoints for OpenStack services | API access to the OpenStack management services - using the endpoint type that has been specified in the hypervisor manager details |
| Node | Ceph monitors | 3300/tcp, 6789/tcp | if Ceph RBD is used as the backend storage - used to collect changed-block lists from Ceph |
SSH transfer
Connection URL: https://KEYSTONE_HOST:5000/v3

| From | To | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | netcat port range defined in node configuration - by default 16000-16999/tcp | optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | if Ceph RBD is used as the backend storage - used for data transfer over NBD |
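A quick way to verify these paths from the Node before digging deeper (hostnames are placeholders):
nc -zv KEYSTONE_HOST 5000
nc -zv HYPERVISOR_HOST 22
nc -zv CEPH_MON_HOST 6789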
QCOW2 files on NFS storage
Example scenario: QCOW2 files residing on NFS
You can configure the NFS volume backend here:
https://docs.openstack.org/cinder/rocky/admin/blockstorage-nfs-backend.html
Make sure the QCOW2 volumes are enabled.
For an NFS backend, it's recommended to set these values in /etc/cinder/cinder.conf:
default_volume_type = nfs
nfs_sparsed_volumes = true
nfs_qcow2_volumes = true
volume_driver = cinder.volume.drivers.nfs.NfsDriver
enabled_backends = nfs
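The NfsDriver also reads its list of exports from a shares file (set with the nfs_shares_config option, commonly /etc/cinder/nfs_shares); each line holds one export, for example (server and path are placeholders):
nfs_server:/srv/cinder-volumes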
Nova volumes
Storware Backup & Recovery is able to back up Nova volumes using the libvirt strategy. In the hypervisor manager settings there is an option Download image from Glance. When this option is enabled, the original image from Glance is downloaded. If it is disabled, the image is not downloaded, but the Nova volume created from it is still backed up.
Adding hypervisor managers
When creating the hypervisor manager, provide the following data in the fields:
URL - Keystone API URL, e.g. https://10.201.32.40:5000/v3
Authentication domain:
name - name of the domain
domainId - optional domain ID
user - OpenStack user
password - password for that user
default project - name of the default project in the domain
Scope VMs to Domain - you can create one or more Authentication Domains based on this setting, as described in the Authentication Domains section below.
Download image from Glance - allows Storware Backup & Recovery to use images from Glance, as described in the disk attachment strategy
When you index the hypervisor manager, make sure you provide the correct SSH credentials for each hypervisor that appears on the Hypervisors tab. You can also use SSH public-key authentication.
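To confirm that the Keystone URL is reachable from the Node, a simple check against the example address used above (a JSON document describing the v3 API indicates the endpoint is up):
curl -k https://10.201.32.40:5000/v3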
Authentication Domains
Storware Backup & Recovery supports OpenStack environments with multiple domains. Each OpenStack Hypervisor Manager needs to have at least one Authentication Domain provided.
Storware Backup & Recovery supports two types of domain authorization:
Unscoped - single credentials to multiple domains
Scoped - single credentials to single domain
Single credentials to single domain
The Scope VMs to Domain option needs to be turned on.
In this setup, the user can create an Authentication Domain for every domain in the OpenStack environment. Projects and Virtual Machines are scanned only in the provided Authentication Domains.
Single credentials to multiple domains
The Scope VMs to Domain option needs to be turned off.
In this setup, the user needs to create only one Authentication Domain. Projects and Virtual Machines are scanned in every domain that the provided user has access to.
OpenStack tags
To list tags for a specific instance:
root@c254:~# nova show d6787375-ea0c-49fd-878b-35b71747c62a |grep tags
| tags | ["test"]
Tags for OpenStack require nova API version >= 2.26.
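The same information is available through the unified OpenStack client, using the instance ID from the example above and requesting the required microversion explicitly:
openstack --os-compute-api-version 2.26 server show d6787375-ea0c-49fd-878b-35b71747c62a -c tags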
OpenStack Access Keys
During Inventory Synchronization, Storware Backup & Recovery scans all Keypairs (to which a user has access) and stores them as Access Keys. When restoring an instance, the user can specify the Access Key.
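To preview which Keypairs the configured user can see (and therefore which will be stored as Access Keys), you can use the unified client:
openstack keypair list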
OpenStack Flavor
During Inventory Synchronization, Storware Backup & Recovery scans all Flavors and saves their configuration. When restoring an instance, the user can specify the flavor.
Instant restore
Node configuration
The first step in configuring the OpenStack Instant Restore service is to create a directory that will be shared by the NFS server. Create a dedicated directory on your Node machine that will be used as the target space for Instant Restore's shared resources.
mkdir /vprotect_data/instant_restore/
chown -R vprotect:vprotect /vprotect_data/instant_restore/
Next, create an NFS share that will allow access to the /vprotect_data/ directory from other machines in the network. Sharing this directory will enable OpenStack clients to use it for virtual machine restoration.
echo '/vprotect_data/ *(fsid=0,no_subtree_check,rw,sync,no_root_squash,insecure)' >> /etc/exports
exportfs -arv
OpenStack configuration
After creating the NFS share on the node, you need to configure the NFS backend in the OpenStack environment. This step will allow OpenStack to access resources stored on the node via NFS. Edit the /etc/cinder/cinder.conf file and add this section at the end of the file:
[nfs-instant-restore]
volume_backend_name=nfs-instant-restore
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_instant_restore
nfs_snapshot_support=True
nfs_qcow2_volumes=True
nfs_sparsed_volumes=true
nfs_mount_options=vers=4
Create the file that you provided in the configuration as the value of the nfs_shares_config parameter:
vi /etc/cinder/nfs_instant_restore
and add the path to the NFS share:
sbr_node_ip:/instant_restore
After creating the NFS share and configuring Cinder, restart the Cinder volume service. Please note that the name of this service may differ depending on the OpenStack version.
systemctl restart openstack-cinder-volume
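After the restart, you can confirm that the new backend is up - the cinder-volume service should report a host entry ending in @nfs-instant-restore:
openstack volume service list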
Storware web UI configuration
After completing the inventory synchronization of the OpenStack environment, you can select the storage with the NFS backend configuration in the Node edit window. Details about the NFS backend should be supplied by the OpenStack administrator.

You should now be able to select "Instant restore" for backed up virtual machines.
Limitations
Storware Backup & Recovery does not back up or restore keypairs that the user configured in Storware Backup & Recovery does not have access to. The restored instance will have no keypairs assigned. In such a case, the keypairs have to be backed up and restored manually under the same name before restoring the instance.
For the libvirt strategy, only QCOW2/RAW files or Ceph RBD are supported as the backend.
The disk attachment method with Ceph requires access to the monitors from the Proxy VM.