Supported platforms requirements

VMware vCenter/ESXi

Connection URL: https://VCENTER_HOST or https://ESXI_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | VMware vCenter/ESXi | 443/tcp | API access |
| Node | VMware ESXi | 902/tcp | HotAdd/NBD |
| Node | VMware vCenter/ESXi | 445/tcp | Instant Restore (NFS) |
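Before registering the platform, you can verify from the Node that the required ports are reachable. A minimal sketch using nc from the nmap-ncat package (or any equivalent port checker); VCENTER_HOST and ESXI_HOST are placeholders for your own hosts:

```bash
# Verify API access to vCenter/ESXi (443/tcp) and data-transfer access to ESXi (902/tcp)
nc -vz VCENTER_HOST 443
nc -vz ESXI_HOST 443
nc -vz ESXI_HOST 902
```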

Nutanix AHV

Disk attachment

Connection URL: https://PRISM_HOST:9440/api/nutanix/v3 (Prism Central or Prism Elements)


Note: when connecting via Prism Central, the same credentials will be used to access all Prism Elements

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Prism Elements (and optionally Prism Central, if used) | 9440/tcp | API access to the Nutanix manager |
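To confirm API access on 9440/tcp from the Node, a hedged sketch against the Prism v3 REST API (assuming Prism Central; PRISM_HOST and the credentials are placeholders for your own environment):

```bash
# List clusters through the Prism v3 API; a JSON response confirms API access on 9440/tcp
curl -ks -u 'admin:PASSWORD' \
  -X POST -H 'Content-Type: application/json' \
  -d '{"kind":"cluster"}' \
  https://PRISM_HOST:9440/api/nutanix/v3/clusters/list
```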

OpenStack

Disk attachment

Connection URL: https://KEYSTONE_HOST:5000/v3

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Keystone, Nova, Glance, Cinder | Ports defined in the endpoints for the OpenStack services | API access to the OpenStack management services, using the endpoint type specified in the hypervisor manager details |
| Node | Ceph monitors | 3300/tcp, 6789/tcp | Only if Ceph RBD is used as the backend storage: used to collect changed-block lists from Ceph |
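The exact API ports depend on how the endpoints are defined in your cloud. A quick way to review them is the OpenStack CLI; a sketch assuming python-openstackclient is installed and credentials are available through the usual OS_* environment variables or a clouds.yaml entry (adjust the interface to match the endpoint type configured in the hypervisor manager details):

```bash
# List the service endpoints and the URLs (and therefore ports) the Node must be able to reach
openstack endpoint list --interface public -c "Service Name" -c Interface -c URL
```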

SSH transfer

Connection URL: https://KEYSTONE_HOST:5000/v3


Note: you must also provide SSH credentials for all hypervisors detected during inventory synchronization

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | Netcat port range defined in the node configuration (16000-16999/tcp by default) | Optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | Only if Ceph RBD is used as the backend storage: used for data transfer over NBD |
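If the optional netcat transfer is used, the hypervisors must be able to reach the Node on the configured port range. A minimal sketch for opening the default range with firewalld on the Node (adjust the range if you changed it in the node configuration):

```bash
# Open the default netcat transfer range towards the Node and make it persistent
firewall-cmd --permanent --add-port=16000-16999/tcp
firewall-cmd --reload
```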

Virtuozzo

SSH transfer

Connection URL: https://KEYSTONE_HOST:5000/v3


Note: you must also provide SSH credentials for all hypervisors detected during inventory synchronization

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | Netcat port range defined in the node configuration (16000-16999/tcp by default) | Optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | Only if Ceph RBD is used as the backend storage: used for data transfer over NBD |

OpenNebula

Disk attachment

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Manager Host | XML-RPC API port (2633/tcp by default) | API access to the OpenNebula management services |
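To confirm that the Node can reach the XML-RPC API, a hedged sketch using the OpenNebula CLI tools (assumptions: the CLI package is installed on the Node and ~/.one/one_auth contains valid credentials; MANAGER_HOST is a placeholder):

```bash
# Point the CLI at the front-end's XML-RPC endpoint and list VMs;
# a successful listing confirms API access on 2633/tcp
ONE_XMLRPC=http://MANAGER_HOST:2633/RPC2 onevm list
```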

oVirt/RHV/OLVM

Disk attachment

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
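To confirm API access from the Node, a hedged sketch using curl with basic authentication (admin@internal and PASSWORD are placeholders for your engine credentials):

```bash
# Fetch the top-level API document; a successful response confirms API access on 443/tcp
curl -ks -u 'admin@internal:PASSWORD' -H 'Accept: application/json' \
  https://MANAGER_HOST/ovirt-engine/api
```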

Disk Image Transfer

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 54322/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer (primary source) |
| Node | oVirt/RHV/OLVM manager | 54323/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer (fallback to the ImageIO Proxy) |

SSH Transfer

Connection URL: https://MANAGER_HOST/ovirt-engine/api


Note: you must also provide SSH credentials for all hypervisors detected during inventory synchronization

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 22/tcp | SSH access for data transfer |
| oVirt/RHV/OLVM hypervisor | Node | Netcat port range defined in the node configuration (16000-16999/tcp by default) | Optional netcat access for data transfer |

Changed-Block Tracking

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 54322/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer |
| Node | oVirt/RHV/OLVM manager | 54323/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer |

Citrix XenServer/XCP-ng


Note: all hosts in the pool must be defined

Single image (XVA-based)

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 443/tcp | API access (for data transfer the management IP is used, unless the transfer NIC parameter is configured in the hypervisor details) |

Changed-Block Tracking

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 443/tcp | API access (for data transfer the management IP is used, unless the transfer NIC parameter is configured in the hypervisor details) |
| Node | Hypervisor | 10809/tcp | NBD access (the data transfer IP is returned by the hypervisor) |

Proxmox VE

Export storage repository

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | If the Node is hosting the staging space: 111/tcp, 111/udp, 2049/tcp, 2049/udp, plus the ports specified in /etc/sysconfig/nfs - MOUNTD_PORT (TCP and UDP), STATD_PORT (TCP and UDP), LOCKD_TCPPORT (TCP), LOCKD_UDPPORT (UDP); otherwise check the documentation of your NFS storage provider | NFS access, if the staging space (export storage domain) is hosted on the Node |
| Node and hypervisor | Shared NFS storage | Check the documentation of your NFS storage provider | NFS access, if the staging space (export storage domain) is hosted on shared storage |
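When the Node itself hosts the staging space, the auxiliary NFS daemons should listen on fixed ports so they can be opened towards the hypervisors. A minimal sketch assuming an EL-based Node with firewalld; the port numbers are only examples and the service names may differ by distribution:

```bash
# Pin the auxiliary NFS daemons to fixed ports (example values) in /etc/sysconfig/nfs
cat >> /etc/sysconfig/nfs <<'EOF'
MOUNTD_PORT=20048
STATD_PORT=32765
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
EOF

# Open the NFS-related ports towards the hypervisors
firewall-cmd --permanent --add-port={111,2049,20048,32765,32803}/tcp
firewall-cmd --permanent --add-port={111,2049,20048,32765,32769}/udp
firewall-cmd --reload

# Restart the NFS services so the new ports take effect
systemctl restart nfs-server rpc-statd
```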

Kubernetes

Connection URL: https://API_HOST:6443

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Kubernetes API host | 22/tcp | SSH access |
| Node | Kubernetes API host | 6443/tcp | API access |
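A quick, hedged check that the API port is reachable from the Node (anonymous access to /version may be disabled in your cluster, but even a 401/403 status proves the port and TLS endpoint are reachable):

```bash
# Print only the HTTP status code returned by the Kubernetes API server on 6443/tcp
curl -ks -o /dev/null -w '%{http_code}\n' https://API_HOST:6443/version
```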

OpenShift

Connection URL: https://API_HOST:6443

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Kubernetes API host | 6443/tcp | API access |
| Node | OpenShift workers | 2049/tcp, 2049/udp | NFS connection |
| OpenShift workers | Node | 2049/tcp, 2049/udp | NFS connection |
| Node | OpenShift workers | 30000-32767/tcp | Access to the service exposed by the Storware Backup & Recovery plugin |

Hyper-V

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Storware Backup & Recovery Agent | 50881/tcp (HTTP), 50882/tcp (HTTPS) | Storware Backup & Recovery Agent access and data transfer; firewall rules are added automatically during agent installation |
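To verify from the Node that an installed agent is reachable, a minimal sketch (AGENT_HOST is a placeholder for the Hyper-V host running the agent; the same check applies to the Azure Stack HCI agent below - any HTTP status code other than 000 means the connection was established):

```bash
# Print only the HTTP status codes for the agent's HTTPS and HTTP ports
curl -ks -o /dev/null -w '%{http_code}\n' https://AGENT_HOST:50882/
curl -s -o /dev/null -w '%{http_code}\n' http://AGENT_HOST:50881/
```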

Azure Stack HCI

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Storware Backup & Recovery Agent | 50881/tcp (HTTP), 50882/tcp (HTTPS) | Storware Backup & Recovery Agent access and data transfer; firewall rules are added automatically during agent installation |

SC//Platform

Export Storage Domain

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | SC//Platform manager | 443/tcp | API access |
| Node | SC//Platform hosts | 445/tcp | SMB transfer |
| SC//Platform hosts | Node | 445/tcp | SMB transfer |

Disk Attachment

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | SC//Platform manager | 443/tcp | API access |

Microsoft 365

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Microsoft 365 | 443/tcp | Microsoft 365 API access |

You can find more detailed descriptions of the Office 365 URLs and IP address ranges in Microsoft's documentation.

To successfully synchronize a Microsoft 365 user account, it must fulfill the following requirements (see the example query after this list):

  • it has an email address,

  • it is not filtered out by location, country, or office location (the user filter in the UI),

  • its user type is set to Member,

  • it has a license or is a shared mailbox.
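To review these attributes for a given account, a hedged sketch against the Microsoft Graph API (TOKEN is a placeholder for an access token with at least the User.Read.All permission; user@example.com is a placeholder user principal name):

```bash
# Return the attributes that the synchronization checks: mail, user type, location fields and licenses
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/users/user@example.com?\$select=mail,userType,usageLocation,officeLocation,assignedLicenses"
```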

OS Agent

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| OS Agent | Node | 15900/tcp | Node - OS Agent communication |

Security Requirements

User Permissions

The vprotect user must be a member of the disk group.
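A minimal sketch for adding the user to the group on a Node where the vprotect user already exists:

```bash
# Add the vprotect user to the disk group (takes effect on the next login/service restart)
usermod -aG disk vprotect
```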

Sudo privileges are required for the following commands (a sample sudoers entry is sketched after these lists):

Storware Backup & Recovery Node:

  • /usr/bin/targetcli

  • /usr/sbin/exportfs

  • /usr/sbin/kpartx

  • /usr/sbin/dmsetup

  • /usr/bin/qemu-nbd

  • /usr/bin/guestmount

  • /usr/bin/fusermount

  • /bin/mount

  • /bin/umount

  • /usr/sbin/parted

  • /usr/sbin/nbd-client

  • /usr/bin/tee

  • /opt/vprotect/scripts/vs/privileged.sh

  • /usr/bin/yum

  • /usr/sbin/mkfs.xfs

  • /usr/sbin/fstrim

  • /usr/sbin/xfs_growfs

  • /usr/bin/docker

  • /usr/bin/rbd

  • /usr/bin/chown

  • /usr/sbin/nvme

  • /bin/cp

  • /sbin/depmod

  • /usr/sbin/modprobe

  • /bin/bash

  • /usr/local/sbin/nbd-client

  • /bin/make

Storware Backup & Recovery server:

  • /opt/vprotect/scripts/application/vp_license.sh

  • /bin/umount

  • /bin/mount
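One way to grant these privileges is a dedicated sudoers drop-in for the vprotect user. A minimal sketch for the Node, showing only the first few commands - extend the command list with the remaining entries above, and always validate the file before relying on it:

```bash
# /etc/sudoers.d/01-vprotect - passwordless sudo for the commands required on the Node
cat > /etc/sudoers.d/01-vprotect <<'EOF'
Defaults:vprotect !requiretty
vprotect ALL=(ALL) NOPASSWD: /usr/bin/targetcli, /usr/sbin/exportfs, /usr/sbin/kpartx, \
    /usr/sbin/dmsetup, /usr/bin/qemu-nbd, /usr/bin/guestmount, /usr/bin/fusermount, \
    /bin/mount, /bin/umount, /opt/vprotect/scripts/vs/privileged.sh
EOF
chmod 0440 /etc/sudoers.d/01-vprotect

# Validate the syntax before it is picked up by sudo
visudo -cf /etc/sudoers.d/01-vprotect
```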

SELinux

PERMISSIVE - enforcing mode currently interferes with the mountable backups (file-level restore) mechanism. It can optionally be changed to ENFORCING if file-level restore is not required.
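A minimal sketch for switching the Node to permissive mode (the runtime change applies immediately; the config edit makes it persistent across reboots):

```bash
# Switch to permissive mode for the running system
setenforce 0

# Make the change persistent
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Confirm the current mode
getenforce
```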
