Kubernetes

Storware Backup & Recovery Node preparation

The Storware Backup & Recovery Node requires kubectl (to install it, you first have to add the Kubernetes package repository) and a kubeconfig with valid certificates (placed in /home/user/.kube) to connect to the Kubernetes cluster.
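
For example, on a RHEL-family node the repository can be added and kubectl installed as follows. This is a sketch based on the upstream Kubernetes packaging instructions; the pinned v1.30 repository path is an assumption, so match it to your cluster version:

    # Add the upstream Kubernetes repository (adjust the version to your cluster)
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
    enabled=1
    gpgcheck=1
    gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
    EOF
    sudo yum install -y kubectl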

  1. Check whether your kubeconfig looks similar to the example below.

Example:

   current-context: admin-cluster.local
   kind: Config
   preferences: {}
   users:
   - name: admin-cluster.local
     user:
       client-certificate-data: <REDACTED>
       client-key-data: <REDACTED>
  2. Copy the configs to the Storware Backup & Recovery Node (skip steps 2 and 3 if you don't use Minikube).

    • If you use Minikube, copy the following files to the Storware Backup & Recovery Node:

      sudo cp /home/user/.kube/config /opt/vprotect/.kube/config
      sudo cp /home/user/.minikube/{ca.crt,client.crt,client.key} /opt/vprotect/.kube

  3. Modify the paths in the config so they point to /opt/vprotect/.kube instead of /home/user/.minikube. Example:

- name: minikube
  user:
    client-certificate: /opt/vprotect/.kube/client.crt
    client-key: /opt/vprotect/.kube/client.key
  4. Afterward, give permissions to the vprotect user:
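
For example, assuming the files were copied to /opt/vprotect/.kube as above:

    # Hand the copied kubeconfig and certificates over to the vprotect user
    sudo chown -R vprotect:vprotect /opt/vprotect/.kube
    # Optional sanity check: the vprotect user should now reach the cluster
    sudo -u vprotect kubectl --kubeconfig /opt/vprotect/.kube/config get nodes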

Kubernetes Nodes should appear in Storware Backup & Recovery after indexing the cluster.

Note: Please provide the URL to the web console and SSH credentials to the master node when creating the Kubernetes hypervisor manager in the Storware Backup & Recovery WebUI. You can also use SSH public key authentication. This is needed so that Storware Backup & Recovery can access your cluster deployments.
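
For public key authentication, the usual OpenSSH workflow applies; the user and host below are placeholders:

    ssh-keygen -t rsa                 # generate a key pair if the node has none yet
    ssh-copy-id user@MASTER_HOST      # install the public key on the master node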

Note: Valid SSH admin credentials should be provided by the user for every Kubernetes node (called a Hypervisor in the Storware Backup & Recovery WebUI). If Storware Backup & Recovery is unable to execute docker commands on a Kubernetes node, it is logged in as a user lacking admin privileges. Make sure your user has been added to the sudo/wheel group (so it can execute commands with sudo).
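
For example, on a RHEL-family system (the user name backupuser is a placeholder):

    sudo usermod -aG wheel backupuser   # add the backup user to the wheel group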

Note: If you want to use Ceph, you must provide the Ceph keyring and configuration. Ceph support requires the ceph-common and rbd-nbd packages to be installed.
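
For example, on a RHEL-family node; /etc/ceph is the default location the Ceph tools read from, and the file names below are the Ceph defaults:

    sudo yum install -y ceph-common rbd-nbd
    # Place the cluster configuration and keyring where the Ceph tools expect them
    sudo cp ceph.conf ceph.client.admin.keyring /etc/ceph/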

Persistent volumes restore/backup

There are two ways of restoring the volume content.

  1. The user should deploy an automatic provisioner which will create persistent volumes dynamically. If Helm is installed, the setup is quick and easy: https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner (see the sketch after this list).

  2. The user should manually create a pool of volumes. Storware Backup & Recovery will pick one of the available volumes with the proper storage class to restore the content.
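
If Helm is available, a minimal sketch of deploying the NFS provisioner from the chart linked above could look like this; note that the stable charts repository is archived, and the release name nfs-provisioner and storage class name nfs are placeholders:

    # Add the archived stable charts repository and install the provisioner (Helm 3 syntax)
    helm repo add stable https://charts.helm.sh/stable
    helm install nfs-provisioner stable/nfs-server-provisioner --set storageClass.name=nfs

For the manual pool, each volume is an ordinary PersistentVolume carrying the storage class Storware Backup & Recovery should pick from. All names, the capacity, and the NFS server address below are hypothetical:

    # Example PersistentVolume for the restore pool (hypothetical values)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: restore-pv-001
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      storageClassName: nfs
      nfs:
        server: 10.0.0.10
        path: /exports/restore-pv-001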

Limitations

  • currently, we support only backups of Deployments/DeploymentConfigs (persistent volumes and metadata)

  • all of the deployment's pods are paused during the backup operation - this is required to achieve consistent backup data

  • for a successful backup, every object used by the Deployment/DeploymentConfig should have an appropriately assigned app label (see the sketch after this list)

  • a storage class must be defined in the Kubernetes environment for backup and restore operations to function properly
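
A minimal sketch of a correctly labelled Deployment; the name myapp and the image are placeholders, and the point is that the Deployment metadata, the selector, and the pod template all carry the same app label:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp          # placeholder name
      labels:
        app: myapp         # app label on the Deployment itself
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp       # selector must match the pod template label
      template:
        metadata:
          labels:
            app: myapp     # app label on the pods
        spec:
          containers:
          - name: myapp
            image: nginx   # placeholder image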

Supported features

Supported backup strategies: Helper pod, Ceph RBD

Feature                                                           Helper pod       Ceph RBD
Minimal version                                                   1.10             1.10
The last snapshot is kept on the system for incremental backups   Yes              Yes
Access to OS required                                             No               No
Proxy VM required                                                 No               No
Full backup                                                       Supported        Supported
Incremental backup                                                Not supported    Supported *
Restore                                                           Supported        Supported
File-level restore                                                Not supported    Supported *
Volume exclusion                                                  Supported        Supported
Quiesced snapshots                                                Supported **     Supported **
Snapshots management                                              Not supported    Not supported
Pre/post command execution                                        Supported ***    Supported ***
Access to VM disk backup over iSCSI                               Not supported    Supported *
Name-based policy assignment                                      Supported        Supported
Tag-based policy assignment                                       Supported        Supported
Power-on after restore                                            Supported        Supported
StatefulSet                                                       Supported        Supported

* When using Ceph RBD as a Persistent Volume

** Deployment pause

*** Only 'post'

Network requirements

Connection URL: https://API_HOST:6443

Source    Destination            Ports       Description
Node      Kubernetes API host    22/tcp      SSH access
Node      Kubernetes API host    6443/tcp    API access
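
A quick way to check both paths from the Node; API_HOST is the placeholder from the connection URL above, and even a 401/403 response to the second command proves the port is reachable:

    ssh user@API_HOST true                   # verifies 22/tcp (SSH access)
    curl -k https://API_HOST:6443/version    # verifies 6443/tcp (API access)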
