OpenStack
Overview
Storware Backup & Recovery supports backup for OpenStack using three strategies:
Libvirt strategy (SSH Transfer):
supporting nova volumes, and QCOW2-based backup from KVM hosts
optional transfer of Ceph RBD volumes directly from Ceph
Disk attachment:
proxy VM installed in each AZ
volume data transferred via cinder-based disk attachment
full backups only for non-Ceph volumes
optional CBT-like incremental backups when Ceph Storage provider is configured (Ceph connectivity only used to fetch changed blocks information)
Disk attachment with generic incremental (preferred):
proxy VM installed in each AZ
volume data transferred via cinder-based disk attachment
storage backend-independent incremental backups using block-level checksum comparison
Backup Strategies
Libvirt strategy (SSH Transfer)
Storware Backup & Recovery supports OpenStack environments that use KVM hypervisors and VMs running on QCOW2 or RAW files. Storware Backup & Recovery communicates with OpenStack APIs such as Nova and Glance to collect metadata and to import data during the restore process. However, the actual backup is done over SSH directly from the hypervisor. Storware Backup & Recovery Node can be installed anywhere - it just needs network access to the OpenStack APIs and to the hypervisors over SSH. Both full and incremental backups are supported.

Backup process:
direct access to the hypervisor over SSH
crash-consistent snapshot taken directly using virsh (QCOW2/RAW file) or rbd snapshot for Ceph (separate call for each storage backend)
optional application consistency using pre/post snapshot command execution
QCOW2/RAW-file data exported over SSH (optionally with netcat)
Ceph RBD volumes:
data exported using rbd export when incremental is used
the last snapshot kept in Ceph for future backups
metadata exported from OpenStack APIs (nova, glance, cinder)
restore recreates files/volumes according to their backend (same transfer mechanism as used in backup) and then defines VM on the hypervisor
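On a KVM host, the crash-consistent snapshot step above corresponds to a disk-only external snapshot in libvirt. The following is only an illustration of the underlying mechanism (domain, snapshot, and disk names are hypothetical; the product drives this over SSH rather than manually):

```shell
# create a disk-only, crash-consistent external snapshot (the VM keeps running)
virsh snapshot-create-as instance-00000001 backup-snap \
  --disk-only --atomic --no-metadata

# the original QCOW2/RAW file is now stable and can be copied off over SSH

# merge the overlay back into the base image and pivot the VM onto it
virsh blockcommit instance-00000001 vda --active --pivot --wait
```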
Disk attachment
Storware Backup & Recovery also supports the disk-attachment method using cinder. This lets you use any cinder-compatible storage while still allowing Storware Backup & Recovery to create backups.
In general, this strategy supports full backups only. However, for Ceph RBD volumes, if a storage provider is specified in the volume type (Storage tab), CBT-like information can be fetched from Ceph to allow incremental backups.
Storware Backup & Recovery needs to communicate with the OpenStack services' APIs to attach drives to the proxy VM with Storware Backup & Recovery Node installed.

Backup process:
crash-consistent snapshot using cinder API
optional application consistency using pre/post snapshot command execution
metadata exported from API
volumes created from snapshotted disks are mounted one by one to the Proxy VM
data read directly on the Proxy VM
incremental backups supported for Ceph RBD - a list of the changed blocks is fetched from the monitors, and only these blocks are read from the attached disk on the Proxy VM
if an instance was created from a glance image and the "Download image from glance" option is enabled, the image data is downloaded from the glance API; during restore, the instance is recreated from the instance metadata and the images fetched from the glance API
restore creates empty disks on the Proxy VM, imports the merged data, then recreates the VM using these volumes; it will reuse the glance image if it is present in the target environment, or upload the image to glance and register it with the restored VM
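The changed-block list mentioned above is what Ceph exposes via snapshot diffs. Conceptually it can be queried like this (pool, image, and snapshot names are hypothetical):

```shell
# list extents that changed between the previous backup snapshot and now
rbd diff --from-snap backup-prev volumes/volume-1234abcd --format json
# each entry is an offset/length pair; only those ranges are read
# from the disk attached to the Proxy VM
```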
Disk attachment with generic incremental
Storware Backup & Recovery also supports the disk-attachment method using cinder, where incremental backups are supported regardless of the underlying storage backend. This strategy, however, requires more resources as it uses block-level checksum computation.
Storware Backup & Recovery needs to communicate with the OpenStack services' APIs to attach drives to the proxy VM with Storware Backup & Recovery Node installed.

Backup process:
crash-consistent snapshot using cinder API
optional application consistency using pre/post snapshot command execution
metadata exported from API
volumes created from snapshotted disks are mounted one by one to the Proxy VM
data read directly on the Proxy VM
incremental backups using checksum computation
if an instance was created from a glance image and the "Download image from glance" option is enabled, the image data is downloaded from the glance API; during restore, the instance is recreated from the instance metadata and the images fetched from the glance API
restore creates empty disks on the Proxy VM, imports the merged data, then recreates the VM using these volumes; it will reuse the glance image if it is present in the target environment, or upload the image to glance and register it with the restored VM
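The generic incremental mechanism can be illustrated with a minimal sketch: split the attached volume into fixed-size blocks, checksum each block, and compare against the checksums stored with the previous backup. The block size and hash algorithm below are assumptions for illustration, not the product's actual parameters:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks (size is an assumption)

def block_checksums(path):
    """Return a list of per-block digests for the attached volume."""
    sums = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            sums.append(hashlib.sha256(chunk).hexdigest())
    return sums

def changed_blocks(old_sums, new_sums):
    """Indices of blocks whose checksum differs (or that are new)."""
    return [i for i, s in enumerate(new_sums)
            if i >= len(old_sums) or old_sums[i] != s]
```

Only the blocks returned by `changed_blocks` would then be read from the attached disk and written to the incremental backup, which is why no storage-side CBT support is needed.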
Ceph RBD storage backend
Storware Backup & Recovery also supports deployments with Ceph RBD as a storage backend. Depending on the strategy, the communication between the node and the Ceph monitor is used for a different purpose:
Libvirt strategy - communicates directly with Ceph monitors using the RBD protocol.
Disk-attachment method - used only for collecting changed-block information during incremental backups (snapshot difference).
A good article on how to set up Ceph with OpenStack can be found here.
Supported features
Supported backup strategies: Disk attachment (preferred), SSH transfer, Disk attachment with generic incremental (preferred if Ceph monitors are not accessible)
| Feature | Disk attachment | Disk attachment with generic incremental | SSH transfer |
| --- | --- | --- | --- |
| Supported versions | Wallaby, Xena, Yoga, Zed, Antelope, Bobcat, Caracal, Dalmatian, Epoxy; Red Hat OpenStack Platform: 17.1, 18.0 | Wallaby, Xena, Yoga, Zed, Antelope, Bobcat, Caracal, Dalmatian, Epoxy; Red Hat OpenStack Platform: 17.1, 18.0 | Wallaby, Xena, Yoga, Zed, Antelope, Bobcat, Caracal, Dalmatian, Epoxy; Red Hat OpenStack Platform: 17.1, 18.0 |
| The last snapshot is kept on the hypervisor for incremental backups | Yes | No | Yes |
| Access to hypervisor OS required | No | No | Yes |
| Proxy VM required | Yes | Yes | No |
| Full backup | Supported | Supported | Supported |
| Incremental backup | Supported * | Supported | Supported * |
| Synthetic backups | Supported | Supported | Supported |
| File-level restore | Supported | Supported | Supported |
| VM disk exclusion | Supported | Supported | Supported |
| Quiesced snapshots | Not supported | Not supported | Not supported |
| Snapshots management | Supported ** | Supported ** | Not supported |
| Pre/post command execution | Supported | Supported | Supported |
| Access to VM disk backup over iSCSI | Supported | Supported | Supported *** |
| VM name-based policy assignment | Supported | Supported | Supported |
| VM tag-based policy assignment | Supported | Supported | Supported |
| Power-on VM after restore | Not supported (always on) | Not supported (always on) | Not supported (always on) |

\* Ceph RBD volumes only
\** Without snapshot revert
\*** Ceph RBD/RAW disks only
Network requirements
Disk attachment
Connection URL: https://KEYSTONE_HOST:5000/v3
| From | To | Ports | Purpose |
| --- | --- | --- | --- |
| Node | Keystone, Nova, Glance, Cinder, Neutron | ports defined in the endpoints for OpenStack services | API access to the OpenStack management services, using the endpoint type specified in the hypervisor manager details |
| Node | Ceph monitors | 3300/tcp, 6789/tcp | if Ceph RBD is used as the backend storage - used to collect changed-block lists from Ceph |
Disk attachment with generic incremental
Connection URL: https://KEYSTONE_HOST:5000/v3
| From | To | Ports | Purpose |
| --- | --- | --- | --- |
| Node | Keystone, Nova, Glance, Cinder, Neutron | ports defined in the endpoints for OpenStack services | API access to the OpenStack management services, using the endpoint type specified in the hypervisor manager details |
SSH transfer
Connection URL: https://KEYSTONE_HOST:5000/v3
Note: you must also provide SSH credentials for all hypervisors detected during inventory sync
| From | To | Ports | Purpose |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | netcat port range defined in the node configuration - by default 16000-16999/tcp | optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | if Ceph RBD is used as the backend storage - used for data transfer over NBD |
Nova volumes
Storware Backup & Recovery can back up Nova volumes using the libvirt (SSH transfer) strategy, which requires direct access to the hypervisor. The image from which the OS has been booted (without the changes made after instance creation) can also be protected - for more information, check the Download image from glance option description in the OpenStack section.
Authentication Domains
Storware Backup & Recovery supports OpenStack environments with multiple domains. Each OpenStack Hypervisor Manager needs to have at least one Authentication Domain provided.
Storware Backup & Recovery supports two types of domain authorization:
Unscoped - single credentials to multiple domains
Scoped - credentials to individual domains
Unscoped - single credentials to multiple domains
Use domain-scoped authorization option needs to be turned OFF.
With this setup, the user needs to create only one Authentication Domain. Projects and Virtual Machines are scanned in every domain that the provided user has access to.
Required permissions:
The account must be able to access and enumerate resources in every domain that should be scanned.
Assign this account an admin role across all domains applicable in your environment, and ensure role inheritance is enabled so access is effective in the underlying projects:
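The exact commands depend on your deployment; with the OpenStack CLI, an inherited domain-level admin assignment for a hypothetical backup_user could look like:

```shell
# repeat for every domain that should be scanned
openstack role add --user backup_user --user-domain default \
  --domain default --inherited admin
```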
You can also use a LESS SECURE alternative - system scope:
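System scope grants the account API-wide access across the whole deployment, which is why it is less secure. A hypothetical assignment for the same backup_user:

```shell
openstack role add --user backup_user --user-domain default \
  --system all admin
```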
Scoped - credentials to individual domains
Use domain-scoped authorization option needs to be turned ON.
With this setup, the user can create Authentication Domains for every domain in OpenStack environment. Projects and Virtual Machines are only scanned in the provided Authentication Domains.
Required permissions:
Separate accounts in individual domains must be assigned the admin role on the domain level (not only on a single project).
Inheritance must be enabled so that the domain-level role is propagated “down” to projects contained in that domain.
Tags
Tags in Nova are also scanned (when nova API ≥ 2.26). Tags can later be used in the auto-assignment of the backup policy.
Tags themselves are not part of the backup.
You can list tags for a specific instance in OpenStack using this command:
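With the OpenStack CLI, it could look like this (the instance name is hypothetical; tags require compute API microversion 2.26 or later):

```shell
openstack server show my-instance -c tags \
  --os-compute-api-version 2.26
```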
Access Keys
During Inventory Synchronization, Storware Backup & Recovery scans all Keypairs (to which a user has access) and lists them as Access Keys. Access keys are not exported during backup. When restoring the instance, in the restore modal -> Advanced tab, the user can specify the Access Key (otherwise, the instance will be restored without one).
Flavors
During Inventory Synchronization, Storware Backup & Recovery scans all Flavors and saves their configuration. When restoring an instance, in the restore modal -> Advanced tab, the user can specify the flavor.
When restoring to a different OpenStack than the original instance was backed up, or the nova version is higher than 2.46 (where flavor ID cannot be fetched from the nova API) - you always need to specify in the restore modal -> Advanced tab which flavor should be used for recovery.
https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#show-server-details
Limitations
Storware Backup & Recovery does not back up or restore keypairs that the user account used in Storware Backup & Recovery doesn't have access to. The restored instance will have no keypairs assigned. In such a case, the keypairs have to be backed up and restored manually under the same name before restoring the instance.
For the libvirt strategy only, QCOW2/RAW files or Ceph RBD are supported as the backend.
The disk attachment method with Ceph requires access to the monitors from the Proxy VM.
General configuration
This section describes advanced setup. We recommend following this guide to configure more sophisticated environments. The wizard-based configuration (accessible from the dashboard) is suitable for simple PoC deployments, where the server and node are already installed as a proxy VM.
Go to the Virtual Environments -> Virtualization Providers and click Create
Select OpenStack at the top
In the General tab:
specify Node configuration for the nodes communicating with Keystone
later you can override these settings at the Hypervisor level, but for now, select a node config that is able to communicate with your OpenStack Keystone
if using the disk attachment method, the configuration must include the nodes residing in their corresponding Proxy VMs
URL - Keystone API URL, e.g. https://10.201.32.40:5000/v3
Region - provide the name of your region (each region must be added as a separate hypervisor manager)
Choose import/export mode - this is the backup strategy discussed in the Backup Strategies section
Trust certificates - if you're using certificates that may not be trusted by your nodes - e.g. self-signed - you can enable this toggle. By default, they will be imported automatically when connecting for the first time anyway.
In the OpenStack settings tab:
Endpoint interface type - interface type used to connect to the OpenStack services' endpoints returned from your Keystone
Download image from glance:
when enabled - this setting only applies when OS (boot) volume is either:
nova volume
cinder volume from image
the image will be downloaded only if the OS (boot) volume is not excluded.
this setting will download the OS image from glance instead of the OS (boot) volume
this means that changes applied to the OS (boot) volume will be discarded, and the original glance image (the one used to create the instance, and in the resulting backup) will be later used for recovery
this may be useful for the disk-attachment strategy (where nova volumes are not supported) when you still need to recover the instance to another OpenStack where this image doesn't exist
Use domain-scoped authorization - depending on your use case (required permissions described in Authentication Domains):
single credentials with permission to access all OpenStack authentication domains and projects -> this toggle should be disabled, and the credentials should be provided in the single Authentication Domain tab available on the left
separate Authentication Domains -> this toggle should be enabled, and all domains used for authentication should be defined on the left with the (+) icon
In the Authentication domain tabs (you can use your OpenStack RC file to fill these fields):
Name - name of domain
Domain ID - optional domain ID
User/Password - OpenStack user and password
Default project - name of default project in the domain being defined
Save and run the first inventory sync - this will also detect hypervisors, which may require two things:
assigning different node configurations to specific hosts (this must correspond with your AZ setup)
providing SSH credentials (when using SSH Transfer backup strategy)
For SSH Transfer backup strategy only:
make sure you provide the correct SSH credentials for each hypervisor listed on the Hypervisors tab. You can also use SSH public key authentication.
if libvirt runs inside the container, for each host, follow this KB article.
When you want to use the Ceph RBD variant in your backup strategy:
follow the Ceph RBD setup
make sure to have this setup done for all nodes that will need access to the Ceph monitors
Make sure that volumes have appeared in Storage -> Instances tab to confirm connectivity between node and Ceph monitors
for each Ceph RBD Storage (volume type), assign your Ceph storage provider and the appropriate Ceph storage pool in Virtual Environments -> Infrastructure -> your hypervisor manager details -> Storage -> volume type -> Ceph settings
When your environment uses an NFS storage backend, make sure to enable QCOW2 file support. Otherwise, backup snapshots will create RAW files (instead of QCOW2):
it's recommended to set these values in /etc/cinder/cinder.conf:
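The recommended values themselves are not reproduced above; for the cinder NFS driver, the options that enable QCOW2-based volumes and snapshots are typically the following (the backend section name is an assumption - use your actual NFS backend section):

```ini
[nfs-backend]
nfs_qcow2_volumes = True
nfs_snapshot_support = True
```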
Run both full and incremental backups to verify the setup.
Instant restore setup
Node configuration
The first step in configuring the OpenStack Instant Restore service is to create a directory that the NFS server will export. Create a specific directory on your Node machine that will be used as the target space for Instant Restore's shared resources.
Next, create an NFS share that will allow access to the /vprotect_data/ directory from other machines in the network. Sharing this directory enables OpenStack clients to use it for virtual machine restoration.
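As an illustration, assuming the /vprotect_data path mentioned above, a minimal export on the Node could look like this (the export options and open client list are examples - restrict them to your OpenStack network and security policy):

```shell
mkdir -p /vprotect_data
# export the directory over NFS
echo "/vprotect_data *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra
systemctl enable --now nfs-server
```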
Cinder storage backend configuration
Paths and commands may vary depending on the version of OpenStack you are using.
After creating the NFS share on the node, you need to configure the NFS backend in the cinder service. This step will allow OpenStack to access resources stored on the node via NFS. Edit the /etc/cinder/cinder.conf file and add this section at the end of the file:
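The section itself is not reproduced above; a typical NFS backend definition uses the standard cinder NFS driver options. The backend name and shares-file path below are assumptions - also remember to append the backend name to enabled_backends in the [DEFAULT] section:

```ini
[storware-nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = storware-nfs
nfs_shares_config = /etc/cinder/nfs_shares
nfs_qcow2_volumes = True
nfs_snapshot_support = True
```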
Create the file you provided in the configuration as the value of the nfs_shares_config parameter and add the path to the NFS share:
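For example, assuming the /etc/cinder/nfs_shares path and the /vprotect_data export from the previous step, the file would contain a single line pointing at the Node (NODE_IP is a placeholder for your Node's address):

```
NODE_IP:/vprotect_data
```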
After creating the NFS server and configuring cinder, restart the cinder volume service. Please note that the name of this service may be different depending on the OpenStack version.
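For example, on Red Hat-based deployments the restart typically looks like this (the unit name varies between distributions and containerized deployments):

```shell
systemctl restart openstack-cinder-volume
# on Ubuntu/Debian-based installs the unit is usually named cinder-volume
```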
Matching nodes to the OpenStack storage backends (volume types)
After completing the inventory synchronization of the OpenStack environment, you can select the storage with the NFS backend configuration in the Node edit window. Details about the NFS backend should be supplied by the OpenStack administrator.

You should now be able to select "Instant restore" for backed up virtual machines.