
3 Node Cluster

Overview

We have prepared three machines with the Red Hat Enterprise Linux 8 operating system in the same network:

10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local

We will use 10.1.1.5/23 as the floating IP of the cluster.
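If these names are not resolvable via DNS, make sure each machine can resolve the others first; a minimal sketch, assuming the entries are not yet present:

# Append cluster name resolution on every machine (skip if DNS already resolves them)
cat >> /etc/hosts <<'EOF'
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
EOF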

1. Storware server installation

Run these steps on all machines in the pacemaker cluster:

  1. Add Storware repository

    vi /etc/yum.repos.d/vProtect.repo
    # Storware Backup & Recovery - Enterprise backup solution for virtual environments repository
    [vprotect]
    name = vProtect
    baseurl = https://repo.storware.eu/storware/current/el8/
    gpgcheck = 0
  2. Add MariaDB repository

    vi /etc/yum.repos.d/MariaDB.repo
    # MariaDB 10.10 RedHatEnterpriseLinux repository list - created 2023-08-23 08:49 UTC
    # https://mariadb.org/download/
    [mariadb]
    name = MariaDB
    # rpm.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
    # baseurl = https://rpm.mariadb.org/10.10/rhel/$releasever/$basearch
    baseurl = https://mirror.creoline.net/mariadb/yum/10.10/rhel/$releasever/$basearch
    # gpgkey = https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
    gpgkey = https://mirror.creoline.net/mariadb/yum/RPM-GPG-KEY-MariaDB
    gpgcheck = 1
  3. Install Storware server

    dnf install -y vprotect-server
  4. Initialize Storware server

    vprotect-server-configure
  5. Redirect port 443 to 8181 on the firewall

    /opt/vprotect/scripts/ssl_port_forwarding_firewall-cmd.sh
  6. Add a redirect rule to allow the local node to communicate with the server on the cluster IP

    firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8181
    firewall-cmd --complete-reload
  7. Open firewall for MariaDB replication:

    firewall-cmd --add-port=3306/tcp --permanent
    firewall-cmd --complete-reload
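To verify, you can optionally list the stored direct rules and check that the server answers on port 443 (a sketch; assumes the server has finished starting):

firewall-cmd --permanent --direct --get-all-rules
curl -k -I https://localhost/       # should be answered by the server via the 443 -> 8181 redirect
systemctl status vprotect-server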

2. Custom SSL certificate configuration

Run all steps as the root user, on the first node of the cluster only. Follow the steps from Enabling HTTPS connectivity for nodes.

3. Storware node installation

Execute on all pacemaker nodes and on any other Storware node machines. If you plan to use the disk attachment backup mode, also configure the LVM filter as described in LVM setup on Storware Backup & Recovery Node for disk attachment backup mode.

  1. Add Storware repository

    vi /etc/yum.repos.d/vProtect.repo
    # Storware Backup & Recovery - Enterprise backup solution for virtual environments repository
    [vprotect]
    name = vProtect
    baseurl = https://repo.storware.eu/storware/current/el8/
    gpgcheck = 0
  2. Install Storware node

    dnf install -y vprotect-node
  3. Initialize Storware node

    vprotect-node-configure
  4. Only if you want to back up Proxmox VE using the export strategy:

    cd /opt/vprotect/scripts/vma
    ./setup_vma.sh vprotect-vma-20180128.tar

4. Backup destination configuration

For a multi-node/cluster environment we suggest using NFS, object storage, or a deduplication appliance as the backup destination. In this example we use NFS.

Execute on all Storware node machines.

  1. Add an entry in /etc/fstab to automount the NFS share

    10.1.1.1:/vprotect /vprotect_data nfs defaults 0 0
  2. Create the mount-point directory for the NFS share:

    mkdir /vprotect_data
  3. Mount NFS share

    mount -a
  4. Create subdirectories for the backup destinations (run on a single node only)

    mkdir /vprotect_data/backup
    mkdir /vprotect_data/backup/synthetic
    mkdir /vprotect_data/backup/filesystem
    mkdir /vprotect_data/backup/dbbackup
  5. Set ownership on the newly created directories

    chown vprotect:vprotect -R /vprotect_data
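As a quick check that the share is mounted and writable by the service account (a sketch; assumes the vprotect user was created during node installation):

df -h /vprotect_data                  # confirm the NFS share is mounted
sudo -u vprotect touch /vprotect_data/backup/.write_test \
  && rm -f /vprotect_data/backup/.write_test   # confirm write access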

5. Cluster Configuration

The cluster is controlled by pacemaker.

5.1 Prepare operating system

Run all steps as the root user, on all machines in the pacemaker cluster:

  1. Stop all services that will be controlled by the cluster, and disable their autostart.

    systemctl stop vprotect-node
    systemctl stop vprotect-server
    systemctl disable vprotect-node
    systemctl disable vprotect-server

5.2 Set up MariaDB replication

Run all steps as the root user, on all cluster nodes:

  1. Create the MariaDB user replicator with the password vPr0tect for replication:

    CREATE USER replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
    GRANT SELECT,REPLICATION SLAVE,REPLICATION CLIENT ON *.* to replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
  2. Add the following to the mysqld section of /etc/my.cnf.d/server.cnf:

    [mysqld]
    lower_case_table_names=1
    log-bin=mysql-bin
    relay-log=relay-bin
    log-slave-updates
    max_allowed_packet=500M
    log_bin_trust_function_creators=1
  3. Set a unique server-id in the same mysqld section of /etc/my.cnf.d/server.cnf on each host:

    On vprotect1.local:

    server-id=10

    On vprotect2.local:

    server-id=20

    On vprotect3.local:

    server-id=30
  4. Restart MariaDB service:

    systemctl restart mariadb
  5. On each host, check the output of the following command:

    SHOW MASTER STATUS;

    Output from vprotect3.local:

    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000006 |      374 |              |                  |
    +------------------+----------+--------------+------------------+
    1 row in set (0.000 sec)

    Output from vprotect1.local:

    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000007 |      358 |              |                  |
    +------------------+----------+--------------+------------------+
    1 row in set (0.000 sec)

    Output from vprotect2.local:

    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000004 |      358 |              |                  |
    +------------------+----------+--------------+------------------+
    1 row in set (0.000 sec)
  6. Configure replication on each MariaDB server. This creates a replication ring (vprotect1 replicates from vprotect3, vprotect2 from vprotect1, and vprotect3 from vprotect2); use the File and Position values reported by SHOW MASTER STATUS on each node's master:

    Execute on vprotect1.local:

    CHANGE MASTER TO
    MASTER_HOST='10.1.1.4',
    MASTER_PORT=3306,
    MASTER_USER='replicator',
    MASTER_PASSWORD='vPr0tect',
    MASTER_LOG_FILE='mysql-bin.000006',
    MASTER_LOG_POS=374;

    Execute on vprotect2.local:

    CHANGE MASTER TO
    MASTER_HOST='10.1.1.2',
    MASTER_PORT=3306,
    MASTER_USER='replicator',
    MASTER_PASSWORD='vPr0tect',
    MASTER_LOG_FILE='mysql-bin.000007',
    MASTER_LOG_POS=358;

    Execute on vprotect3.local:

    CHANGE MASTER TO
    MASTER_HOST='10.1.1.3',
    MASTER_PORT=3306,
    MASTER_USER='replicator',
    MASTER_PASSWORD='vPr0tect',
    MASTER_LOG_FILE='mysql-bin.000004',
    MASTER_LOG_POS=358;
  7. Start MariaDB replication. Execute on vprotect1.local:

    START SLAVE;

    Check the output of:

    SHOW SLAVE STATUS\G

    Wait until the output shows:

    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes

    Repeat this step on vprotect2.local and vprotect3.local.
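To confirm that writes propagate around the ring, you can run a quick throwaway test (a sketch; repltest is a placeholder database name):

mysql -u root -p -e "CREATE DATABASE repltest;"        # on vprotect1.local
mysql -u root -p -e "SHOW DATABASES LIKE 'repltest';"  # should appear on vprotect2.local and vprotect3.local
mysql -u root -p -e "DROP DATABASE repltest;"          # clean up, on vprotect1.local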

5.2.1 Set the same password for the vprotect user in MariaDB

Execute only on the first node of the cluster.

  1. Copy the password from the file /opt/vprotect/payara.properties

    eu.storware.vprotect.db.password=SECRETPASSWORD
  2. Log in to MariaDB

    mysql -u root -p
  3. Set password for vprotect user:

    SET PASSWORD FOR 'vprotect'@'localhost' = PASSWORD('SECRETPASSWORD');
    quit;
  4. Copy the following configuration files from /opt/vprotect/ on vprotect1.local to the same location on the other cluster hosts (a copy sketch is shown after this list):

    keystore.jks
    log4j2-server.xml
    payara.properties
    vprotect.env
    vprotect-keystore.jks
    license.key
  5. Restore ownership of the copied files (on each target host)

    chown vprotect:vprotect -R /opt/vprotect/
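A minimal sketch of the copy in step 4, assuming root SSH access from vprotect1.local to the other nodes (run it before fixing ownership in step 5):

for host in vprotect2.local vprotect3.local; do
  scp /opt/vprotect/{keystore.jks,log4j2-server.xml,payara.properties,vprotect.env,vprotect-keystore.jks,license.key} \
      root@${host}:/opt/vprotect/
done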

5.3 Configure pacemaker

Run all steps as the root user.

5.3.1 Run on every node in the cluster

  1. Install pacemaker packages

    dnf install -y pcs pacemaker fence-agents-all
  2. Create SSH keys and distribute them so that each host can reach the others (a sketch is shown after this list).

  3. Open ports on firewall

    firewall-cmd --permanent --add-service=high-availability
    firewall-cmd --reload
  4. Start pcsd service

    systemctl start pcsd.service
    systemctl enable pcsd.service
  5. Set the same password for the hacluster user on every node

    passwd hacluster
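A minimal sketch for the SSH key distribution in step 2, assuming root-to-root SSH between the cluster nodes:

ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519   # generate a key pair without a passphrase
for host in vprotect1.local vprotect2.local vprotect3.local; do
  ssh-copy-id root@${host}                         # install the public key on each node
done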

5.3.2 Run only on the first node of the cluster

  1. Authenticate the cluster nodes

    pcs host auth vprotect1.local vprotect2.local vprotect3.local
  2. Create the cluster

    pcs cluster setup vp vprotect1.local vprotect2.local vprotect3.local
  3. Start the cluster

    pcs cluster start --all
  4. Disable STONITH (fencing)

    pcs property set stonith-enabled=false
  5. Create the floating IP in the cluster

    pcs resource create vp-vip1 IPaddr2 ip=10.1.1.5 cidr_netmask=23 --group vpgrp
  6. Add vprotect-server to cluster

    pcs resource create "vp-vprotect-server.service" systemd:vprotect-server.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
  7. Add vprotect-node to cluster

    pcs resource create "vp-vprotect-node.service" systemd:vprotect-node.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp

6. Register the Storware node on the server (on all hosts)

  1. Add the server certificate to the trusted certificates

    /opt/vprotect/scripts/node_add_ssl_cert.sh 10.1.1.5 443
  2. Register the node on the server (the ${HOSTNAME%%.*} expansion passes the short hostname, e.g. vprotect1, as the node name)

    vprotect node -r ${HOSTNAME%%.*} admin https://10.1.1.5:443/api

7. Useful commands for controlling the cluster

To update or service Storware, unmanage its services in the cluster:

pcs resource unmanage vpgrp

Return them to managed mode:

pcs resource manage vpgrp

Show the cluster status:

pcs status

Stop a cluster node:

pcs cluster stop vprotect1.local

Stop all cluster nodes:

pcs cluster stop --all

Start all cluster nodes:

pcs cluster start --all

Clear old resource errors in the cluster:

pcs resource cleanup
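To test a failover manually, you can for example move the resource group to another node and then remove the resulting location constraint:

pcs resource move vpgrp vprotect2.local
pcs resource clear vpgrp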