3 Node Cluster

Overview

In this example, we have prepared three machines with the Red Hat Enterprise Linux 9 operating system in the same network:

  • 10.1.1.2 vprotect1.local

  • 10.1.1.3 vprotect2.local

  • 10.1.1.4 vprotect3.local

We will use IP 10.1.1.5/24 (storware.local) for the floating IP of our cluster.

On each machine, we are going to install the Storware Server and Node. The cluster will work in Active-Passive mode, which means the Storware services will be active on only a single host of the cluster at a time.

Storware server installation

Run these steps on all machines in the pacemaker cluster.

  • Please follow the Storware server installation, assuming the Red Hat Enterprise Linux 9 OS: Server

  • During installation, do not execute the command systemctl start vprotect-server

Steps:

  • Create a repository file for Storware Backup and Recovery

  • Create a repository file for MariaDB

  • Server installation (for Red Hat 9)

  • Server configuration - do not execute systemctl start vprotect-server

Next, add firewall rules:

  1. Add a redirection rule to allow the local node to communicate with the server on the cluster IP:

  2. Open the firewall for MariaDB replication:
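The two firewall steps above can be sketched as follows. This is a minimal sketch using firewalld: the direct NAT rule and the MariaDB port (3306) are assumptions based on the example addresses; adjust them to your environment.

```shell
# Redirect local traffic addressed to the cluster (floating) IP back to
# this host, so the local node can reach the server even when the
# floating IP is not currently assigned here (assumed direct NAT rule):
firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -d 10.1.1.5 -j REDIRECT

# Open the MariaDB port for replication between the cluster hosts
# (3306 is the default MariaDB port, an assumption here):
firewall-cmd --permanent --add-port=3306/tcp

# Apply the new rules:
firewall-cmd --reload
```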

Custom SSL certificate configuration

Run all steps as the root user, on the first node of the cluster. In our example, we are going to create a certificate for the domain storware.local.

Follow the steps from Enabling HTTPS connectivity for nodes.

Storware node installation

  • Please follow the Storware node installation, assuming the Red Hat Enterprise Linux 9 OS: Node

  • During installation, do not execute the commands: systemctl start vprotect-node, reboot, or the node registration command

Steps:

  • Node installation (for Red Hat 9)

  • Node configuration - only point 3, "Configure the operating system"

Backup destination configuration

For a multi-node/cluster environment, we suggest using NFS, Object Storage, or a deduplication appliance as the backup destination. In this example, we use NFS. Execute these steps on all Storware node machines.

  1. Add an entry in /etc/fstab to automount the NFS share

  2. Create a directory for mounting the NFS share:

  3. Mount the NFS share

  4. Create subdirectories for the backup destinations (run only on a single node)

  5. Add privileges to the newly created shares
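The NFS steps above can be sketched as below. The NFS server name, export path, mount point, subdirectory names, and the vprotect owner are placeholders and assumptions; substitute your own values.

```shell
# 1. Add an /etc/fstab entry to automount the NFS share
#    (nfs.local:/backup and /backup are placeholders):
echo "nfs.local:/backup  /backup  nfs  defaults  0 0" >> /etc/fstab

# 2. Create the mount point:
mkdir -p /backup

# 3. Mount the share using the fstab entry:
mount /backup

# 4. Create subdirectories for the backup destinations
#    (run only on a single node; names are examples):
mkdir -p /backup/bd1 /backup/bd2

# 5. Grant the vprotect service user access to the new shares
#    (assumes the node runs as the vprotect user):
chown -R vprotect:vprotect /backup/bd1 /backup/bd2
```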

Cluster Configuration

The cluster is controlled by Pacemaker.

Prepare the operating system

Run all steps as the root user, on all machines in the pacemaker cluster:

  1. Stop all services controlled by the cluster, and disable their autostart.

  2. Configure a DNS server, or the hosts file on each machine: vi /etc/hosts
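The two preparation steps can be sketched as follows, using the hosts and addresses from the Overview. The exact service names to stop are assumed to be the three installed earlier (vprotect-server, vprotect-node, mariadb).

```shell
# 1. Stop the services that the cluster will control and disable autostart,
#    so Pacemaker alone decides where they run:
systemctl stop vprotect-server vprotect-node mariadb
systemctl disable vprotect-server vprotect-node mariadb

# 2. If no DNS server is available, add the cluster hosts to /etc/hosts
#    (addresses taken from the example environment):
cat >> /etc/hosts <<'EOF'
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
10.1.1.5 storware.local
EOF
```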

Set MariaDB replication

Run all steps as the root user, on all cluster nodes:

  1. Create the MariaDB user replication with the password vPr0tect for replication:

  2. Add changes to /etc/my.cnf.d/server.cnf in the [mysqld] section:

  3. Add host-specific changes to /etc/my.cnf.d/server.cnf in the [mysqld] section. On vprotect1.local:

On vprotect2.local:

On vprotect3.local:

  4. Restart the MariaDB service:

  5. On each host, show the output of the command:

Output from vprotect3.local:

Output from vprotect1.local:

Output from vprotect2.local:

  6. Set up replication on each MariaDB server. Execute on vprotect1.local:

Execute on vprotect2.local:

Execute on vprotect3.local:

  7. Start MariaDB replication. Execute on vprotect1.local:

Show the output of the command:

Wait until you see in the output:

Repeat the last step on hosts vprotect2.local and vprotect3.local.
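The replication steps above can be sketched as below, using standard MariaDB commands. The [mysqld] settings, a ring replication topology (1 → 2 → 3 → 1), and the binlog file/position values are assumptions for illustration; take the real File and Position values from SHOW MASTER STATUS on the source host.

```shell
# 1. Create the replication user (run on all hosts; password from this example):
mysql -u root -e "CREATE USER 'replication'@'%' IDENTIFIED BY 'vPr0tect';
                  GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';
                  FLUSH PRIVILEGES;"

# 2./3. Example [mysqld] settings in /etc/my.cnf.d/server.cnf
#       (server_id must be unique per host: 1, 2 and 3):
#         [mysqld]
#         server_id=1
#         log-bin=mysql-bin
#         binlog-format=mixed

# 4. Restart MariaDB:
systemctl restart mariadb

# 5. Note the current binlog file and position on each host:
mysql -u root -e "SHOW MASTER STATUS;"

# 6. Point each host at its replication source (ring topology assumed;
#    substitute the File/Position values reported by the source host):
mysql -u root -e "CHANGE MASTER TO MASTER_HOST='vprotect2.local',
                  MASTER_USER='replication', MASTER_PASSWORD='vPr0tect',
                  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=330;"

# 7. Start replication and verify; wait until both threads report Yes:
mysql -u root -e "START SLAVE;"
mysql -u root -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running'
```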

Set the same password for the vprotect user in MariaDB

  1. Copy the password from the file /opt/vprotect/server/quarkus.properties

  2. Log in to MariaDB

  3. Set the password for the vprotect user:

  4. Copy the configuration files from vprotect1.local to the other cluster hosts

  5. Add permissions to the copied files (execute on each host)
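A sketch of these steps follows. The property key holding the password, the MariaDB account name ('vprotect'@'localhost'), and the file ownership are assumptions; check the actual contents of quarkus.properties on your installation.

```shell
# 1. Read the database password used by the server
#    (the exact key name may differ between versions):
grep -i password /opt/vprotect/server/quarkus.properties

# 2./3. Log in to MariaDB and set the same password for the vprotect user
#       (replace the placeholder with the value found above):
mysql -u root -e "SET PASSWORD FOR 'vprotect'@'localhost' = PASSWORD('<password-from-file>');"

# 4. Copy the configuration file to the other cluster hosts:
scp /opt/vprotect/server/quarkus.properties root@vprotect2.local:/opt/vprotect/server/
scp /opt/vprotect/server/quarkus.properties root@vprotect3.local:/opt/vprotect/server/

# 5. Fix ownership of the copied file (execute on each host;
#    assumes the server runs as the vprotect user):
chown vprotect:vprotect /opt/vprotect/server/quarkus.properties
```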

Configure pacemaker

Run all steps as the root user.

Run on every node in the cluster

  1. Install the pacemaker packages

  2. Create SSH keys and make them known to the other hosts. Create an SSH key on each host:

To make the key known on another host, run on host1:

run on host2:

run on host3:

  3. Open ports on the firewall

  4. Start the pcsd service

  5. Set the same password for the hacluster user
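The per-node steps can be sketched as below. The package list and key parameters are assumptions; on RHEL 9 the HighAvailability repository must be enabled for the pcs/pacemaker packages.

```shell
# 1. Install the cluster packages:
dnf install -y pcs pacemaker corosync

# 2. Create an SSH key and distribute it to the other hosts
#    (repeat with the appropriate host names on each machine):
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@vprotect2.local
ssh-copy-id root@vprotect3.local

# 3. Open the firewall for cluster traffic (corosync, pcsd, etc.):
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload

# 4. Start and enable the pcsd service:
systemctl enable --now pcsd

# 5. Set the same password for the hacluster user on every host:
passwd hacluster
```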

Run only on the first node of the cluster

  1. Authenticate the nodes of the cluster (with the hacluster user)

  2. Create the cluster

  3. Start the cluster

  4. Disable STONITH

  5. Create a floating IP in the cluster

  6. Add vprotect-server to the cluster

  7. Register the Storware node on the server (run on all hosts). 7.1. Add the certificate to the trusted store:

7.2. Register the node on the server:

  8. Add vprotect-node to the cluster
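These first-node steps can be sketched with pcs as follows. The cluster name, resource names, and colocation constraints are assumptions for illustration; the floating IP and netmask come from the Overview. The node registration step (7) uses Storware's own tooling and is not reproduced here.

```shell
# 1. Authenticate the cluster nodes with the hacluster user:
pcs host auth vprotect1.local vprotect2.local vprotect3.local -u hacluster

# 2./3. Create and start the cluster (cluster name is an example):
pcs cluster setup vprotect-cluster vprotect1.local vprotect2.local vprotect3.local
pcs cluster start --all

# 4. Disable STONITH (no fencing device in this example):
pcs property set stonith-enabled=false

# 5. Create the floating IP resource (10.1.1.5/24 from the Overview):
pcs resource create cluster-ip ocf:heartbeat:IPaddr2 ip=10.1.1.5 cidr_netmask=24 op monitor interval=30s

# 6. Add the Storware server as a cluster resource, kept on the same
#    host as the floating IP:
pcs resource create vprotect-server systemd:vprotect-server
pcs constraint colocation add vprotect-server with cluster-ip INFINITY

# 8. After registering the nodes, add the node service the same way:
pcs resource create vprotect-node systemd:vprotect-node
pcs constraint colocation add vprotect-node with cluster-ip INFINITY
```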

Useful commands for controlling the cluster:

To unmanage the Storware services (e.g. for an update or service maintenance):

To return them to managed mode:

Show the status of the cluster:

Stop a single cluster node:

Stop all nodes of the cluster:

Start all nodes of the cluster:

Clear old errors in the cluster:
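As a sketch, the standard pcs commands for each of the operations above are shown below; the resource names (vprotect-server, vprotect-node) are assumptions matching the example resources created earlier.

```shell
# Unmanage the Storware services (e.g. before an update):
pcs resource unmanage vprotect-server
pcs resource unmanage vprotect-node

# Return them to managed mode:
pcs resource manage vprotect-server
pcs resource manage vprotect-node

# Show the status of the cluster:
pcs status

# Stop a single cluster node:
pcs cluster stop vprotect2.local

# Stop / start all nodes of the cluster:
pcs cluster stop --all
pcs cluster start --all

# Clear old resource errors:
pcs resource cleanup
```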
