# Kubernetes

## Storware Backup & Recovery Node preparation

The Storware Backup & Recovery Node requires `kubectl` to be installed (you have to add the Kubernetes package repository to install `kubectl`) and a `kubeconfig` with valid certificates (placed in `/home/user/.kube`) to connect to the Kubernetes cluster.
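
A sketch of the `kubectl` installation for a RHEL-compatible Node, assuming the upstream v1.30 package repository (match the minor version to your cluster):

```bash
# Add the upstream Kubernetes package repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF

# Install the kubectl client
sudo yum install -y kubectl
```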

1. Check whether your kubeconfig looks similar to the example below.

Example:

```yaml
current-context: admin-cluster.local
kind: Config
preferences: {}
users:
- name: admin-cluster.local
  user:
    client-certificate-data: <REDACTED>
    client-key-data: <REDACTED>
```
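
Before copying it anywhere, you can verify that this kubeconfig actually reaches the cluster (the path below is the default location mentioned above):

```bash
# Run as the regular user that owns the kubeconfig; it should list the cluster nodes
kubectl --kubeconfig /home/user/.kube/config get nodes
```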

2. Copy the configuration files to the Storware Backup & Recovery Node. (**Skip steps 2 and 3 if you don't use Minikube.**)
   * If you use Minikube, copy the kubeconfig and the Minikube certificates to `/opt/vprotect/.kube`, as shown in the commands below.
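
The copy step from the list above, broken out into separate commands (paths assume the default Minikube layout under `/home/user`):

```bash
# Copy the kubeconfig and the Minikube certificates for use by Storware Backup & Recovery
sudo cp /home/user/.kube/config /opt/vprotect/.kube/config
sudo cp /home/user/.minikube/{ca.crt,client.crt,client.key} /opt/vprotect/.kube
```
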
3. Modify the paths in `config` so they point to `/opt/vprotect/.kube` instead of `/home/user/.minikube`. Example:

```yaml
- name: minikube
  user:
    client-certificate: /opt/vprotect/.kube/client.crt
    client-key: /opt/vprotect/.kube/client.key
```

4. Finally, give the `vprotect` user ownership of the copied files:

```bash
chown -R vprotect:vprotect /opt/vprotect/.kube
```
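
A quick sanity check that the `vprotect` user can now reach the cluster (assuming the kubeconfig was copied to `/opt/vprotect/.kube/config` as above):

```bash
# Should list the cluster nodes without certificate or permission errors
sudo -u vprotect kubectl --kubeconfig /opt/vprotect/.kube/config get nodes
```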

![](https://content.gitbook.com/content/0FWMFN0y1yUTAd3cSRaK/blobs/e0Z96CKO9ffqeLE3AYcR/protecting_ve-containers-kubernetes.png)

Kubernetes Nodes should appear in Storware Backup & Recovery after indexing the cluster.

{% hint style="info" %}
**Note**: Please provide the URL to the web console and SSH credentials to the master node when creating the Kubernetes hypervisor manager in the Storware Backup & Recovery WebUI. You can also use [SSH public key authentication](https://docs.storware.eu/backup-and-recovery-7.4/deployment/common-tasks/ssh-public-key-authentication). Storware Backup & Recovery needs this access to reach your cluster deployments.
{% endhint %}
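
If you opt for key-based authentication, the linked documentation above is the authoritative procedure; a minimal sketch, run on the Storware Backup & Recovery Node, could look like this (`master-node.example.com` is a placeholder host):

```bash
# Switch to the vprotect user, generate a key pair, and push the public key to the master node
sudo -i -u vprotect
ssh-keygen -t rsa -b 4096
ssh-copy-id root@master-node.example.com
```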

{% hint style="info" %}
**Note:** Valid SSH admin credentials should be provided **for every Kubernetes node** (each node appears as a *Hypervisor* in the Storware Backup & Recovery WebUI). If Storware Backup & Recovery is unable to execute docker commands on a Kubernetes node, it is logged in as a user without admin privileges. Make sure the user has been added to the sudo/wheel group so it can execute commands with `sudo`.
{% endhint %}
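
For example, on a RHEL-based node (the user name is a placeholder; on Debian/Ubuntu the group is typically `sudo`):

```bash
# Add the SSH user provided to Storware Backup & Recovery to the wheel group
sudo usermod -aG wheel backupadmin
```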

{% hint style="info" %}
**Note:** If you want to use Ceph, you must provide the Ceph keyring and configuration. Ceph support also requires the `ceph-common` and `rbd-nbd` packages to be installed.
{% endhint %}
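
A possible preparation of the Node, assuming the default `/etc/ceph` locations and a reachable monitor host named `ceph-mon` (both are placeholders for your environment):

```bash
# Install the Ceph client packages (provided by the Ceph repositories for your distribution)
sudo yum install -y ceph-common rbd-nbd

# Copy the cluster configuration and keyring from a monitor node
sudo scp ceph-mon:/etc/ceph/ceph.conf /etc/ceph/
sudo scp ceph-mon:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
```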

### **Persistent volumes restore/backup**

There are two ways of restoring volume content:

1. Deploy an automatic provisioner that creates persistent volumes dynamically. If Helm is installed, the setup is quick and easy: <https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner> (see the sketch after this list).
2. Manually create a pool of volumes. Storware Backup & Recovery will pick one of the available volumes with the proper storage class to restore the content.
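
For option 1, a minimal sketch using the chart linked above (the `stable` repository is archived, so adjust the repository and chart to whichever provisioner you actually use; the release name and size are placeholders):

```bash
# Add the archived "stable" repository that hosts the linked chart
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Install the NFS provisioner; its storage class ("nfs" by default) can then be
# referenced by the persistent volume claims that Storware Backup & Recovery restores
helm install nfs-provisioner stable/nfs-server-provisioner \
  --set persistence.enabled=true \
  --set persistence.size=100Gi
```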

## **Requirements**

* Velero installed
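
To verify that this requirement is met, both the CLI and the in-cluster components can be checked:

```bash
# Prints the client version and, if the server-side components are deployed, the server version
velero version

# The Velero deployment normally lives in the "velero" namespace
kubectl -n velero get pods
```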

## **Limitations**

* currently, we support only backups of Deployments/DeploymentConfigs (persistent volumes and metadata)
* **all of the deployment's pods will be paused during the backup operation** - this is required to achieve consistent backup data
* for a successful backup, every object used by the Deployment/DeploymentConfig should have an `app` label assigned appropriately (see the labeling example after this list)
* a storage class must be defined in the Kubernetes environment for backup and restore operations to function properly
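
A hypothetical labeling example for the `app` label requirement above (object names are placeholders for your own resources):

```bash
# Give the Deployment and the objects it uses a matching "app" label
kubectl label deployment my-app app=my-app --overwrite
kubectl label service my-app app=my-app --overwrite
kubectl label pvc my-app-data app=my-app --overwrite
```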

## Supported features


**Supported backup strategies:** Helper pod, Ceph RBD

<table data-full-width="false"><thead><tr><th width="274"></th><th width="240">Helper pod</th><th width="230">Ceph RBD</th></tr></thead><tbody><tr><td>Minimal version</td><td>1.30</td><td>1.30</td></tr><tr><td>The last snapshot is kept on the system for incremental backups</td><td>Yes</td><td>Yes</td></tr><tr><td>Access to OS required</td><td>No</td><td>No</td></tr><tr><td>Proxy VM required</td><td>No</td><td>No</td></tr></tbody></table>

<table data-header-hidden><thead><tr><th width="273"></th><th width="239">Helper pod</th><th>Ceph RBD</th></tr></thead><tbody><tr><td>Full backup</td><td>Supported</td><td>Supported</td></tr><tr><td>Incremental backup</td><td>Not supported</td><td>Supported *</td></tr><tr><td>Synthetic backups</td><td>Not supported ****</td><td>Supported</td></tr><tr><td>File-level restore</td><td>Not supported</td><td>Supported *</td></tr><tr><td>Volume exclusion</td><td>Supported</td><td>Supported</td></tr><tr><td>Quiesced snapshots</td><td>Supported **</td><td>Supported **</td></tr><tr><td>Snapshots management</td><td>Not supported</td><td>Not supported</td></tr><tr><td>Pre/post command execution</td><td>Supported ***</td><td>Supported ***</td></tr><tr><td>Access to VM disk backup over iSCSI</td><td>Not supported</td><td>Supported *</td></tr><tr><td>Name-based policy assignment</td><td>Supported</td><td>Supported</td></tr><tr><td>Tag-based policy assignment</td><td>Supported</td><td>Supported</td></tr><tr><td>Power-on after restore</td><td>Supported</td><td>Supported</td></tr><tr><td>StatefulSet</td><td>Supported</td><td>Supported</td></tr></tbody></table>

*\* When using Ceph RBD as Persistent Volume*

*\*\* Deployment pause*

*\*\*\* Only 'post'*

*\*\*\*\* A synthetic backup destination can be used, but this strategy only supports full backups*

## Network requirements

**Connection URL:** `https://API_HOST:6443`

| Source | Destination         | Ports    | Description |
| ------ | ------------------- | -------- | ----------- |
| Node   | Kubernetes API host | 22/tcp   | SSH access  |
| Node   | Kubernetes API host | 6443/tcp | API access  |
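
A quick way to confirm the Node can reach both ports (`API_HOST` is a placeholder for your Kubernetes API host):

```bash
# Check SSH and Kubernetes API reachability from the Storware Backup & Recovery Node
nc -vz API_HOST 22
nc -vz API_HOST 6443
```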
