# 3 Node Cluster

### Overview

In this example, we have prepared 3 machines with the Red Hat 9 operating system on the same network:

* `10.1.1.2 vprotect1.local`
* `10.1.1.3 vprotect2.local`
* `10.1.1.4 vprotect3.local`

We will use IP `10.1.1.5/24` (`storware.local`) for the **floating IP** of our cluster.

**On each machine**, we are going to install the Storware **Server and Node**. The cluster will work in Active-Passive mode, which means the Storware services will be **active on only a single host** of the cluster at a time.

## Storware server installation

Run these steps on **all machines** in the Pacemaker cluster.

{% hint style="info" %}

* Please follow the Storware server installation, assuming the Red Hat Enterprise Linux 9 OS: [#server](https://docs.storware.eu/installation/installation-with-rpms#server "mention")
* During installation, **do not execute** the command `systemctl start vprotect-server`
  {% endhint %}

Steps:

* Create a repository file for Storware Backup and Recovery
* Create a repository file for MariaDB
* Server installation (for Red Hat 9)
* Server configuration - do not execute `systemctl start vprotect-server`

Next, add firewall rules:

1. Add a redirection rule so the local node can communicate with the server via the cluster IP:

```
firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8181
firewall-cmd --complete-reload
```

2. Open the firewall for MariaDB replication:

```
firewall-cmd --add-port=3306/tcp --permanent
firewall-cmd --complete-reload
```

## Configuring a custom SSL certificate

Run all steps as the root user, and only on the first node of the cluster. In our example, we are going to create a certificate for the domain `storware.local`.

Follow the steps from [enabling HTTPS connectivity for nodes](https://docs.storware.eu/deployment/common-tasks/enabling-https-connectivity-for-nodes)
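If you only need a certificate for testing, a self-signed one for `storware.local` can be generated as follows (a sketch; the linked guide is the authoritative procedure, and the file names here are illustrative):

```shell
# Generate a private key and a self-signed certificate for the cluster name
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=storware.local" \
  -keyout storware.local.key -out storware.local.crt

# Inspect the subject to confirm the common name
openssl x509 -in storware.local.crt -noout -subject
```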

## Storware node installation

{% hint style="info" %}

* Please follow the Storware node installation, assuming the Red Hat Enterprise Linux 9 OS: [#node](https://docs.storware.eu/installation/installation-with-rpms#node "mention")
* During installation, **do not execute** the commands `systemctl start vprotect-node`, `reboot`, or `vc node inst register`
  {% endhint %}

Steps:

* Node installation (For Red Hat 9)
* Node configuration - only point 3, "Configure the operating system."

## Backup destination configuration

For a multi-node/cluster environment, we suggest NFS, object storage, or a deduplication appliance as the backup destination. In this example, we use NFS. Execute these steps on all Storware node machines.

1. Add an entry in `/etc/fstab` to automount the NFS share:

```
10.1.1.1:/vprotect /vprotect_data nfs defaults 0 0
```

2. Create the mount point for the NFS share:

```
mkdir /vprotect_data
```

3. Mount the NFS share

```
mount -a
```

4. Create subdirectories for backup destinations (run only on a single node)

```
mkdir /vprotect_data/backup
mkdir /vprotect_data/backup/synthetic
mkdir /vprotect_data/backup/filesystem
mkdir /vprotect_data/backup/dbbackup
```

5. Set ownership on the newly created directories:

```
chown vprotect:vprotect -R /vprotect_data
```

## Cluster Configuration

The cluster is controlled by Pacemaker.

### Prepare the operating system

All steps run as the root user. Run these steps on all machines in the Pacemaker cluster:

1. Stop all services controlled by the cluster, and disable autostart.

```
systemctl stop vprotect-node
systemctl stop vprotect-server
systemctl disable vprotect-node
systemctl disable vprotect-server
```

2. Configure a DNS server, or edit the hosts file on each machine (`vi /etc/hosts`):

```
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
10.1.1.5 storware.local
```

### Set MariaDB replication

All steps run as the root user. Run on all cluster nodes:

1. Create the MariaDB user `replicator` with password `vPr0tect` for replication:

```
CREATE USER replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
GRANT SELECT,REPLICATION SLAVE,REPLICATION CLIENT ON *.* to replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
```

2. Add changes to `/etc/my.cnf.d/server.cnf` in the mysqld section:

```
[mysqld]
lower_case_table_names=1
log-bin=mysql-bin
relay-log=relay-bin
log-slave-updates
max_allowed_packet=500M
log_bin_trust_function_creators=1
```

3. Set a unique `server-id` in the same `mysqld` section of `/etc/my.cnf.d/server.cnf`. On vprotect1.local:

```
server-id=10
```

On vprotect2.local:

```
server-id=20
```

On vprotect3.local:

```
server-id=30
```

4. Restart MariaDB service:

```
systemctl restart mariadb
```

5. On each host, check the output of:

```
SHOW MASTER STATUS;
```

Output from vprotect3.local:

```
MariaDB [(none)]> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      328 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```

Output from vprotect1.local:

```
MariaDB [(none)]> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      328 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```

Output from vprotect2.local:

```
MariaDB [(none)]> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      328 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```

6. Set up replication on each MariaDB server, using the `File` and `Position` values reported by its master. Execute on vprotect1.local:

```
CHANGE MASTER TO
MASTER_HOST='10.1.1.4',
MASTER_PORT=3306,
MASTER_USER='replicator',
MASTER_PASSWORD='vPr0tect',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```

Execute on vprotect2.local:

```
CHANGE MASTER TO
MASTER_HOST='10.1.1.2',
MASTER_PORT=3306,
MASTER_USER='replicator',
MASTER_PASSWORD='vPr0tect',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```

Execute on vprotect3.local:

```
CHANGE MASTER TO
MASTER_HOST='10.1.1.3',
MASTER_PORT=3306,
MASTER_USER='replicator',
MASTER_PASSWORD='vPr0tect',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=328;
```

7. Start MariaDB replication. Execute on vprotect1.local:

```
START SLAVE;
```

Check the output of:

```
SHOW SLAVE STATUS\G
```

Wait until you see in the output:

```
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
```

Repeat this step on vprotect2.local and vprotect3.local.
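To spot-check replication without reading the full `SHOW SLAVE STATUS\G` output, the two thread states can be filtered out (a sketch; assumes the `mysql` client can authenticate as root):

```shell
# Print only the IO/SQL thread states from the replication status
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running:'
```

Both lines should read `Yes` on every node.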

#### Set the same password for the vprotect user in MariaDB

{% hint style="info" %}
Execute only on the first node of the cluster.
{% endhint %}

1. Copy the password from the file `/opt/vprotect/server/quarkus.properties`:

```
quarkus.datasource.password=SECRETPASSWORD
```
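The same value can be extracted non-interactively, which is handy when scripting the next step (a sketch; the path comes from the step above):

```shell
# Read the datasource password from quarkus.properties
DB_PASS=$(grep '^quarkus.datasource.password=' /opt/vprotect/server/quarkus.properties | cut -d= -f2-)
echo "$DB_PASS"
```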

2. Log in to MariaDB

```
mysql -u root -p
```

3. Set the password for the vprotect user:

```
SET PASSWORD FOR 'vprotect'@'localhost' = PASSWORD('SECRETPASSWORD');
quit;
```

4. Copy the configuration files from vprotect1.local to the other cluster hosts, e.g. with `scp`:

```
cd /opt/vprotect/server/
scp keystore.jks log4j2-server.xml quarkus.properties vprotect.env \
    vprotect-keystore.jks license.key vprotect2.local:/opt/vprotect/server/
scp keystore.jks log4j2-server.xml quarkus.properties vprotect.env \
    vprotect-keystore.jks license.key vprotect3.local:/opt/vprotect/server/
```

5. Set permissions on the copied files (execute on each host):

```
chown vprotect:vprotect -R /opt/vprotect/
```

### Configure pacemaker

All steps run as the root user.

#### Run on every node in the cluster

1. Install Pacemaker packages:

```
dnf --enablerepo=highavailability -y install pacemaker pcs
```

2. Create SSH keys and distribute them so each host can log in to the others. Create an SSH key on each host:

```
ssh-keygen
```
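If you script the setup, the key can also be generated non-interactively (a sketch; the ed25519 key type and empty passphrase are assumptions):

```shell
# Generate an ed25519 key pair with no passphrase
# (ssh-keygen prompts before overwriting an existing key file)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
```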

Copy the public key to the other hosts. Run on host1:

```
ssh-copy-id 10.1.1.3
ssh-copy-id 10.1.1.4
```

Run on host2:

```
ssh-copy-id 10.1.1.2
ssh-copy-id 10.1.1.4
```

Run on host3:

```
ssh-copy-id 10.1.1.2
ssh-copy-id 10.1.1.3
```

3. Open ports on the firewall

```
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
```

4. Start pcsd service

```
systemctl start pcsd.service
systemctl enable pcsd.service
```

5. Set the same password for the `hacluster` user on all nodes:

```
passwd hacluster
```

#### Run only on the **first** node of the cluster

1. Authenticate the cluster nodes (as the `hacluster` user):

```
pcs host auth vprotect1.local vprotect2.local vprotect3.local
```

2. Create cluster

```
pcs cluster setup vp vprotect1.local vprotect2.local vprotect3.local
```

3. Run cluster

```
pcs cluster start --all
```

4. Disable STONITH (fencing):

```
pcs property set stonith-enabled=false
```

5. Create the floating IP resource in the cluster:

```
pcs resource create vp-vip1 IPaddr2 ip=10.1.1.5 cidr_netmask=24 --group vpgrp
```

6. Add vprotect-server to the cluster

```
pcs resource create "vp-vprotect-server.service" systemd:vprotect-server.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
```

7. Register the Storware node on the server (run on all hosts)

7.1. Add the server certificate to the trusted store:

```
/opt/vprotect/node/scripts/node_add_ssl_cert.sh 10.1.1.5 443
```

7.2. Register the node on the server:

```
vc node inst register --name=${HOSTNAME%%.*} --login=admin --password=vPr0tect --apiurl=https://storware.local:443/api
```
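The `${HOSTNAME%%.*}` expansion in the command above strips everything from the first dot, so each node registers under its short hostname:

```shell
# Bash parameter expansion: remove the longest suffix matching '.*'
h=vprotect1.local
echo "${h%%.*}"
```

On vprotect1.local this prints `vprotect1`.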

8. Add vprotect-node to the cluster

```
pcs resource create "vp-vprotect-node.service" systemd:vprotect-node.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
```

## Useful commands to control the cluster

To update or service Storware, unmanage its services in the cluster:

```
pcs resource unmanage vpgrp
```

Return the services to managed mode:

```
pcs resource manage vpgrp
```

Show the cluster status:

```
pcs status
```

Stop a cluster node:

```
pcs cluster stop vprotect1.local
```

Stop all nodes of the cluster:

```
pcs cluster stop --all
```

Start all nodes of the cluster:

```
pcs cluster start --all
```

Clear old resource errors in the cluster:

```
pcs resource cleanup
```
