# 3 Node Cluster

## Overview

We have prepared three machines running Red Hat Enterprise Linux 8 on the same network:

10.1.1.2 vprotect1.local\
10.1.1.3 vprotect2.local\
10.1.1.4 vprotect3.local

We will use 10.1.1.5/23 as the floating IP of the cluster.
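
The cluster hostnames must resolve on every machine. If they are not served by DNS, a minimal sketch of the `/etc/hosts` entries to add on each host:

```
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
```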

## 1. Storware server installation

Run these steps on all machines in the pacemaker cluster:

1. Add Storware repository

   ```
   vi /etc/yum.repos.d/vProtect.repo
   ```

   ```
   # Storware Backup & Recovery - Enterprise backup solution for virtual environments repository
   [vprotect]
   name = vProtect
   baseurl = https://repo.storware.eu/storware/current/el8/
   gpgcheck = 0
   ```
2. Add MariaDB repository

   ```
   vi /etc/yum.repos.d/MariaDB.repo
   ```

   ```
   # MariaDB 10.10 RedHatEnterpriseLinux repository list - created 2023-08-23 08:49 UTC
   # https://mariadb.org/download/
   [mariadb]
   name = MariaDB
   # rpm.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
   # baseurl = https://rpm.mariadb.org/10.10/rhel/$releasever/$basearch
   baseurl = https://mirror.creoline.net/mariadb/yum/10.10/rhel/$releasever/$basearch
   # gpgkey = https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
   gpgkey = https://mirror.creoline.net/mariadb/yum/RPM-GPG-KEY-MariaDB
   gpgcheck = 1
   ```
3. Install Storware server

   ```
   dnf install -y vprotect-server
   ```
4. Initialize Storware server

   ```
   vprotect-server-configure
   ```
5. Redirect port 443 to 8181 on the firewall

   ```
   /opt/vprotect/scripts/ssl_port_forwarding_firewall-cmd.sh
   ```
6. Add a redirect rule to allow the local node to communicate with the server via the cluster IP

   ```
   firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8181
   firewall-cmd --complete-reload
   ```
7. Open firewall for MariaDB replication:

   ```
   firewall-cmd --add-port=3306/tcp --permanent
   firewall-cmd --complete-reload
   ```
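
To verify the firewall configuration from the steps above, a quick check:

```
firewall-cmd --list-ports
firewall-cmd --direct --get-all-rules
```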

## 2. Configure a custom SSL certificate

Run all steps as the root user, on the first node of the cluster.

Follow the steps in [enabling HTTPS connectivity for nodes](https://docs.storware.eu/70/deployment/common-tasks/enabling-https-connectivity-for-nodes).

## 3. Storware node installation

Execute on all pacemaker nodes and on any other Storware node machines.

1. Add Storware repository

   ```
   vi /etc/yum.repos.d/vProtect.repo
   ```

   ```
   # Storware Backup & Recovery - Enterprise backup solution for virtual environments repository
   [vprotect]
   name = vProtect
   baseurl = https://repo.storware.eu/storware/current/el8/
   gpgcheck = 0
   ```
2. Install Storware node

   ```
   dnf install -y vprotect-node
   ```
3. Initialize Storware node

   ```
   vprotect-node-configure
   ```
4. [Configure LVM filter](https://docs.storware.eu/70/deployment/common-tasks/lvm-setup-on-storware-backup-and-recovery-node-for-disk-attachment-backup-mode).
5. Only if you want to back up Proxmox using the export strategy:

   ```
   cd /opt/vprotect/scripts/vma
   ./setup_vma.sh vprotect-vma-20180128.tar
   ```

## 4. Backup destination configuration

For a multi-node/cluster environment we suggest NFS, object storage, or a deduplication appliance as the backup destination. In this example we use NFS.

Execute on all Storware node machines.

1. Add an entry in `/etc/fstab` to automount the NFS share

   ```
   10.1.1.1:/vprotect /vprotect_data nfs defaults 0 0
   ```
2. Create the mount point for the NFS share:

   ```
   mkdir /vprotect_data
   ```
3. Mount the NFS share

   ```
   mount -a
   ```
4. Create subdirectories for backup destinations (run on a single node only)

   ```
   mkdir /vprotect_data/backup
   mkdir /vprotect_data/backup/synthetic
   mkdir /vprotect_data/backup/filesystem
   mkdir /vprotect_data/backup/dbbackup
   ```
5. Set ownership on the newly created directories

   ```
   chown vprotect:vprotect -R /vprotect_data
   ```
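
To verify the mount and the ownership, a quick check on each node:

```
df -h /vprotect_data
ls -l /vprotect_data/backup
```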

## 5. Cluster Configuration

The cluster is controlled by pacemaker.

### 5.1 Prepare operating system

Run all steps as the root user, on all machines in the pacemaker cluster:

1. Stop all services that will be controlled by the cluster and disable their autostart:

   ```
   systemctl stop vprotect-node
   systemctl stop vprotect-server
   systemctl disable vprotect-node
   systemctl disable vprotect-server
   ```

### 5.2 Set MariaDB replication

Run all steps as the root user, on all cluster nodes:

1. Create the MariaDB user `replicator` with password `vPr0tect` for replication:

   ```mysql
   CREATE USER replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
   GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO replicator@'10.1.1.%';
   ```
2. Add the following to the `[mysqld]` section of `/etc/my.cnf.d/server.cnf`:

   ```
   [mysqld]
   lower_case_table_names=1
   log-bin=mysql-bin
   relay-log=relay-bin
   log-slave-updates
   max_allowed_packet=500M
   log_bin_trust_function_creators=1
   ```
3. Set a unique `server-id` in the same `[mysqld]` section of `/etc/my.cnf.d/server.cnf` on each host:

   On vprotect1.local:

   ```
   server-id=10
   ```

   On vprotect2.local:

   ```
   server-id=20
   ```

   On vprotect3.local:

   ```
   server-id=30
   ```
4. Restart MariaDB service:

   ```
   systemctl restart mariadb
   ```
5. On each host, check the output of:

   ```
   SHOW MASTER STATUS;
   ```

   Output from vprotect3.local:

   ```output
   +------------------+----------+--------------+------------------+
   | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
   +------------------+----------+--------------+------------------+
   | mysql-bin.000006 |      374 |              |                  |
   +------------------+----------+--------------+------------------+
   1 row in set (0.000 sec)
   ```

   Output from vprotect1.local:

   ```output
   +------------------+----------+--------------+------------------+
   | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
   +------------------+----------+--------------+------------------+
   | mysql-bin.000007 |      358 |              |                  |
   +------------------+----------+--------------+------------------+
   1 row in set (0.000 sec)
   ```

   Output from vprotect2.local:

   ```output
   +------------------+----------+--------------+------------------+
   | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
   +------------------+----------+--------------+------------------+
   | mysql-bin.000004 |      358 |              |                  |
   +------------------+----------+--------------+------------------+
   1 row in set (0.000 sec)
   ```
6. Configure replication on each MariaDB server; the nodes form a ring (vprotect1 replicates from vprotect3, vprotect2 from vprotect1, vprotect3 from vprotect2). Use the `File` and `Position` values reported in step 5 by the node being replicated from:

   Execute on vprotect1.local:

   ```
   CHANGE MASTER TO
   MASTER_HOST='10.1.1.4',
   MASTER_PORT=3306,
   MASTER_USER='replicator',
   MASTER_PASSWORD='vPr0tect',
   MASTER_LOG_FILE='mysql-bin.000006',
   MASTER_LOG_POS=374;
   ```

   Execute on vprotect2.local:

   ```
   CHANGE MASTER TO
   MASTER_HOST='10.1.1.2',
   MASTER_PORT=3306,
   MASTER_USER='replicator',
   MASTER_PASSWORD='vPr0tect',
   MASTER_LOG_FILE='mysql-bin.000007',
   MASTER_LOG_POS=358;
   ```

   Execute on vprotect3.local:

   ```
   CHANGE MASTER TO
   MASTER_HOST='10.1.1.3',
   MASTER_PORT=3306,
   MASTER_USER='replicator',
   MASTER_PASSWORD='vPr0tect',
   MASTER_LOG_FILE='mysql-bin.000004',
   MASTER_LOG_POS=358;
   ```
7. Start MariaDB replication:\
   Execute on vprotect1.local:

   ```
   START SLAVE;
   ```

   Check the output of:

   ```
   SHOW SLAVE STATUS\G
   ```

   Wait until the output shows:

   ```
   Slave_IO_Running: Yes
   Slave_SQL_Running: Yes
   ```

   Repeat this step on vprotect2.local and vprotect3.local.
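
A quick sanity check that replication works across the ring: create a throwaway database (the name `repl_test` here is just an example) on one node, confirm it appears on the others, then drop it.

```
mysql -u root -p -e "CREATE DATABASE repl_test;"        # on vprotect1.local
mysql -u root -p -e "SHOW DATABASES LIKE 'repl_test';"  # on vprotect2.local and vprotect3.local
mysql -u root -p -e "DROP DATABASE repl_test;"          # on vprotect1.local
```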

#### 5.2.1 Set the same password for the vprotect user in MariaDB

{% hint style="info" %}
Execute only on the first node of the cluster.
{% endhint %}

1. Copy the password from the file `/opt/vprotect/payara.properties`:

   ```
   eu.storware.vprotect.db.password=SECRETPASSWORD
   ```
2. Log in to MariaDB

   ```
   mysql -u root -p
   ```
3. Set the password for the `vprotect` user:

   ```
   SET PASSWORD FOR 'vprotect'@'localhost' = PASSWORD('SECRETPASSWORD');
   quit;
   ```
4. Copy the following configuration files from `/opt/vprotect/` on vprotect1.local to the other cluster hosts (see the `scp` sketch after this list):

   ```
   keystore.jks
   log4j2-server.xml
   payara.properties
   vprotect.env
   vprotect-keystore.jks
   license.key
   ```
5. Set ownership on the copied files

   ```
   chown vprotect:vprotect -R /opt/vprotect/
   ```
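
A minimal sketch of the copy in step 4 using `scp`, assuming root SSH access between the nodes and the example hostnames:

```
cd /opt/vprotect
for host in vprotect2.local vprotect3.local; do
  scp keystore.jks log4j2-server.xml payara.properties \
      vprotect.env vprotect-keystore.jks license.key \
      root@${host}:/opt/vprotect/
done
```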

### 5.3 Configure pacemaker

Run all steps as the root user.

#### 5.3.1 Run on every node in cluster

1. Install pacemaker packages

   ```
   dnf install -y pcs pacemaker fence-agents-all
   ```
2. Create SSH keys and distribute them to the other hosts so the nodes can reach each other without a password (see the sketch after this list).
3. Open ports on firewall

   ```
   firewall-cmd --permanent --add-service=high-availability
   firewall-cmd --reload
   ```
4. Start pcsd service

   ```
   systemctl start pcsd.service
   systemctl enable pcsd.service
   ```
5. Set the same password for the `hacluster` user

   ```
   passwd hacluster
   ```
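
A minimal sketch of the SSH key exchange from step 2, assuming the root user and the example hostnames:

```
# Generate a key pair without a passphrase (run on each node)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Copy the public key to every node in the cluster
for host in vprotect1.local vprotect2.local vprotect3.local; do
  ssh-copy-id root@${host}
done
```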

#### 5.3.2 Run only on first node of cluster

1. Authenticate nodes of cluster

   ```
   pcs host auth vprotect1.local vprotect2.local vprotect3.local
   ```
2. Create cluster

   ```
   pcs cluster setup vp vprotect1.local vprotect2.local vprotect3.local
   ```
3. Start the cluster

   ```
   pcs cluster start --all
   ```
4. Disable STONITH (fencing)

   ```
   pcs property set stonith-enabled=false
   ```
5. Create the floating IP resource in the cluster

   ```
   pcs resource create vp-vip1 IPaddr2 ip=10.1.1.5 cidr_netmask=23 --group vpgrp
   ```
6. Add vprotect-server to cluster

   ```
   pcs resource create "vp-vprotect-server.service" systemd:vprotect-server.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
   ```
7. Add vprotect-node to cluster

   ```
   pcs resource create "vp-vprotect-node.service" systemd:vprotect-node.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
   ```

## 6. Register Storware nodes on the server (on all node hosts)

1. Add the server certificate to the trusted store

   ```
   /opt/vprotect/scripts/node_add_ssl_cert.sh 10.1.1.5 443
   ```
2. Register the node on the server

   ```
   vprotect node -r ${HOSTNAME%%.*} admin https://10.1.1.5:443/api
   ```
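
   The `${HOSTNAME%%.*}` expansion strips everything from the first dot, so each node registers under its short hostname:

   ```
   # e.g. on vprotect1.local:
   echo ${HOSTNAME%%.*}   # prints: vprotect1
   ```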

## 7. Useful commands to control the cluster

To update or service Storware, unmanage its services from the cluster:

```
pcs resource unmanage vpgrp
```

Return them to managed mode:

```
pcs resource manage vpgrp
```

Show status of cluster:

```
pcs status
```

Stop a single cluster node:

```
pcs cluster stop vprotect1.local
```

Stop all nodes of cluster:

```
pcs cluster stop --all
```

Start all nodes of cluster:

```
pcs cluster start --all
```

Clear old errors in cluster:

```
pcs resource cleanup
```

