# Add-ons
EDB Postgres for Kubernetes supports add-ons that can be enabled on a per-cluster basis: the external backup adapter (in its two variants), Kasten, and Velero.
Note
If you are planning to use Velero in OpenShift, please refer to the OADP section in the OpenShift documentation.
All add-ons are automatically available to the operator, but in order to be used
they need to be enabled at the cluster level via the `k8s.enterprisedb.io/addons`
annotation.
## External Backup Adapter
The external backup adapter add-ons provide a generic way to integrate EDB Postgres for Kubernetes with a third-party backup tool, offering customizable ways to identify, via labels and/or annotations:
- which PVC group to backup
- which PVCs to exclude, in case the cluster has one or more active replicas
- the Pod running the PostgreSQL instance that has been selected for the backup (a standby or, if not available, the primary)
You can choose between two add-ons that differ only in the way they allow you to configure the adapter for your backup system:

`external-backup-adapter`
: in case you want to customize the behavior at the operator's configuration level, via either a config map or a secret, and share it with all the Postgres clusters that are managed by the operator's deployment (see the `external-backup-adapter` section below)

`external-backup-adapter-cluster`
: in case you want to customize the behavior of the adapter at the Postgres cluster level, through a specific annotation (see the `external-backup-adapter-cluster` section below)
Such add-ons allow you to define the names of the annotations that will contain the commands to be run before or after taking a backup in the pod selected by the operator.
As a result, any third-party backup tool for Kubernetes can rely on the above to coordinate itself with a PostgreSQL cluster, or a set of them.
Recovery simply relies on the operator to reconcile the cluster from an existing PVC group.
Important
The External Backup Adapter is not a tool to perform backups. It simply provides a generic interface that any third-party backup tool in the Kubernetes space can use. Such tools are responsible for safely storing the PVCs and/or their content, and for making them available at recovery time together with all the necessary resource definitions of your Kubernetes cluster.
### Customizing the adapter
As mentioned above, the adapter can be configured in two ways, which then
determines the actual add-on you need to use in your `Cluster` resource.
If you are planning to define the same behavior for all the Postgres `Cluster`
resources managed by the operator, we recommend that you use the
`external-backup-adapter` add-on, and configure the annotations/labels in the
operator's configuration.

If you are planning to have different behaviors for a subset of the Postgres
`Cluster` resources that you have, we recommend that you use the
`external-backup-adapter-cluster` add-on.
Both add-ons share the same capabilities in terms of customization, which needs to be defined as a YAML object having the following keys:

- `electedResourcesDecorators`
- `excludedResourcesDecorators`
- `backupInstanceDecorators`
- `preBackupHookConfiguration`
- `postBackupHookConfiguration`
Each section is explained below. Further down you'll find the instructions on how to customize each of the two add-ons, with some examples.
#### The `electedResourcesDecorators` section
This section allows you to configure an array of labels and/or annotations that will be put on every elected PVC group.
Each element of the array must have the following fields:

`key`
: the name of the key for the label or annotation

`metadataType`
: the type of metadata, either `"label"` or `"annotation"`

`value`
: the value that will be assigned to the label or annotation
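For illustration, a single entry combining the three fields might look like the following sketch (the `app.example.com/elected` key is a placeholder, not a name prescribed by the operator):

```yaml
electedResourcesDecorators:
  - key: "app.example.com/elected"
    metadataType: "label"
    value: "true"
```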
#### The `excludedResourcesDecorators` section
This section allows you to configure an array of labels and/or annotations that will be placed on every excluded pod and PVC.
Each element of the array must have the same fields as the
`electedResourcesDecorators` section above.
#### The `backupInstanceDecorators` section
This section allows you to configure an array of labels and/or annotations that will be placed on the instance that has been selected for the backup by the operator and which contains the hooks to be run.
Each element of the array must have the same fields as the
`electedResourcesDecorators` section above.
#### The `preBackupHookConfiguration` section
This section allows you to control the names of the annotations in which the
operator will place the name of the container, the command to run before taking
the backup, and the command to run in case of error/abort on the third-party
tool side. Such metadata will be applied on the instance that's been selected by
the operator for the backup (see `backupInstanceDecorators` above).
The following fields must be provided:

`container`
: Specifies where to place the information about the container that will run the pre-backup command. The container name is a fixed value and cannot be configured. Will be saved in the annotations. To decorate the pod with hooks, refer to `instanceWithHookDecorators`.

`command`
: Specifies where to place the information about the command that will be executed before the backup is taken. The command that will be executed is a fixed value and cannot be configured. Will be saved in the annotations. To decorate the pod with hooks, refer to `instanceWithHookDecorators`.

`onError`
: Specifies where to place the information about the command that will be executed in case of an error. The command that will be executed is a fixed value and cannot be configured. Will be saved in the annotations. To decorate the pod with hooks, refer to `instanceWithHookDecorators`.
#### The `postBackupHookConfiguration` section
This section allows you to control the names of the annotations in which the
operator will place the name of the container and the command to run after taking
the backup. Such metadata will be applied on the instance that's been selected by
the operator for the backup (see `backupInstanceDecorators` above).
The following fields must be provided:

`container`
: Specifies where to place the information about the container that will run the post-backup command. The container name is a fixed value and cannot be configured. Will be saved in the annotations. To decorate the pod with hooks, refer to `instanceWithHookDecorators`.

`command`
: Specifies where to place the information about the command that will be executed after the backup is taken. The command that will be executed is a fixed value and cannot be configured. Will be saved in the annotations. To decorate the pod with hooks, refer to `instanceWithHookDecorators`.
### The `external-backup-adapter` add-on
The `external-backup-adapter` add-on can be entirely configured at the
operator's level via the `EXTERNAL_BACKUP_ADDON_CONFIGURATION` field in the
operator's `ConfigMap`/`Secret`.
For more information, please refer to the provided sample file at the end of this section, or the example below:
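As an illustrative sketch (the `ConfigMap` name, namespace, and all `app.example.com/...` keys are placeholders, not values shipped with the operator), the field could be set like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
data:
  EXTERNAL_BACKUP_ADDON_CONFIGURATION: |
    electedResourcesDecorators:
      - key: "app.example.com/elected"
        metadataType: "label"
        value: "true"
    # ... remaining keys as described in the sections above
```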
The add-on can be activated by adding the following annotation to the `Cluster`
resource:
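A minimal sketch follows; the cluster name and sizing are illustrative, and the annotation value is assumed to be a JSON array of add-on names:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # Enable the external backup adapter add-on for this cluster
    k8s.enterprisedb.io/addons: '["external-backup-adapter"]'
spec:
  instances: 3
  storage:
    size: 1Gi
```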
### The `external-backup-adapter-cluster` add-on
The `external-backup-adapter-cluster` add-on must be configured in each
`Cluster` resource in which you intend to use it, through the
`k8s.enterprisedb.io/externalBackupAdapterClusterConfig` annotation, which
accepts the YAML object as content, as outlined in the following example:
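A sketch of such a `Cluster` resource (names and decorator keys are placeholders) could look like this:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # Enable the per-cluster variant of the adapter
    k8s.enterprisedb.io/addons: '["external-backup-adapter-cluster"]'
    # The adapter configuration, as a YAML literal
    k8s.enterprisedb.io/externalBackupAdapterClusterConfig: |
      electedResourcesDecorators:
        - key: "app.example.com/elected"
          metadataType: "label"
          value: "true"
      # ... remaining keys as described in the sections above
spec:
  instances: 3
  storage:
    size: 1Gi
```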
#### About the fencing annotation
If the configured external backup adapter backs up annotations, the fencing
annotation will be set by the pre-backup hook and persist to the restored
cluster. After restoring the cluster, you will need to manually remove the
fencing annotation from the `Cluster` object to fix this.
This can be done with the `cnp` plugin for kubectl:
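As a sketch, assuming a cluster named `cluster-example` (check `kubectl cnp fencing --help` for the exact syntax of your plugin version):

```shell
# Lift fencing from all instances of the cluster
kubectl cnp fencing off cluster-example "*"
```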
Or, if you don't have the `cnp` plugin, you can remove the fencing annotation
manually with the following command:
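Assuming the fencing annotation is `k8s.enterprisedb.io/fencedInstances` and the cluster is named `cluster-example`, the trailing `-` tells `kubectl annotate` to remove the annotation:

```shell
kubectl annotate cluster cluster-example k8s.enterprisedb.io/fencedInstances-
```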
Please refer to the fencing documentation for more information.
### Limitations
As far as the backup part is concerned, currently, the EDB Postgres for
Kubernetes integration with `external-backup-adapter` and
`external-backup-adapter-cluster` supports cold backups only. These are
also referred to as offline backups. This means that the selected replica
is temporarily fenced so that `external-backup-adapter` and
`external-backup-adapter-cluster` can take a physical snapshot of the PVC group,
namely the `PGDATA` volume and, where available, the WAL volume.
In this short timeframe, the standby cannot accept read-only connections.
If no standby is available - usually because we're in a single-instance cluster -
and the annotation `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary` is
set to `true`, `external-backup-adapter` and `external-backup-adapter-cluster`
will temporarily fence the primary, causing downtime in terms of read-write
operations. This use case is normally left to development environments.
### Full example of YAML file
Here is a full example of YAML content to be placed in either:

- the `EXTERNAL_BACKUP_ADDON_CONFIGURATION` option, as part of the operator's configuration process described above, for the `external-backup-adapter` add-on, or
- the `k8s.enterprisedb.io/externalBackupAdapterClusterConfig` annotation for the `external-backup-adapter-cluster` add-on
Hint
Copy the content below and paste it inside the `ConfigMap` or `Secret` that
you use to configure the operator, or in the annotation in the `Cluster`, making
sure you use the `|` character that YAML reserves for literals,
as well as proper indentation. Use the comments to help you customize the
options for your tool.
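The following is an illustrative sketch assembled from the field descriptions above, not a verbatim sample file: every `app.example.com/...` key is a placeholder to be replaced with the labels and annotations your backup tool expects, and the exact shape of the hook keys should be verified against your operator version.

```yaml
# Labels/annotations placed on the PVC group elected for backup
electedResourcesDecorators:
  - key: "app.example.com/elected"
    metadataType: "label"
    value: "true"
# Labels/annotations placed on every excluded pod and PVC
excludedResourcesDecorators:
  - key: "app.example.com/excluded"
    metadataType: "label"
    value: "true"
  - key: "app.example.com/excluded-reason"
    metadataType: "annotation"
    value: "Not the elected backup target"
# Labels/annotations placed on the instance selected for the backup
backupInstanceDecorators:
  - key: "app.example.com/hasHooks"
    metadataType: "label"
    value: "true"
# Names of the annotations that will carry the pre-backup hook details
preBackupHookConfiguration:
  container:
    key: "app.example.com/pre-backup-container"
  command:
    key: "app.example.com/pre-backup-command"
  onError:
    key: "app.example.com/pre-backup-on-error"
# Names of the annotations that will carry the post-backup hook details
postBackupHookConfiguration:
  container:
    key: "app.example.com/post-backup-container"
  command:
    key: "app.example.com/post-backup-command"
```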
## Kasten
Kasten is a very popular data protection tool for Kubernetes, enabling backup and restore, disaster recovery, and application mobility for Kubernetes applications. For more information, see the Kasten website and the Kasten by Veeam Implementation Guide.
In brief, to enable transparent integration with Kasten on an EDB Postgres for
Kubernetes Cluster, you just need to add the `kasten` value to the
`k8s.enterprisedb.io/addons` annotation in a `Cluster` spec. For example:
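A minimal sketch (the cluster name and sizing are illustrative):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # Enable the Kasten add-on for this cluster
    k8s.enterprisedb.io/addons: '["kasten"]'
spec:
  instances: 3
  storage:
    size: 1Gi
```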
Once the cluster is created and healthy, the operator will select the farthest ahead replica instance to be the designated backup and will add Kasten-specific backup hooks through annotations and labels to that instance.
Important
The operator will refuse to shut down a primary instance to take a cold backup unless the Cluster is annotated with
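Based on the limitations described below, this is presumably the `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary` annotation; a sketch:

```yaml
metadata:
  annotations:
    # Allow the primary of a single-instance cluster to be fenced for a cold backup
    k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary: "true"
```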
For further guidance on how to configure and use Kasten, see the Implementation Guide's Configuration and Using sections.
### Limitations
As far as the backup part is concerned, currently, the EDB Postgres for
Kubernetes integration with Kasten supports cold backups only. These are
also referred to as offline backups. This means that the selected replica
is temporarily fenced so that Kasten can take a physical
snapshot of the PVC group, namely the `PGDATA` volume and, where available,
the WAL volume.
In this short timeframe, the standby cannot accept read-only connections.
If no standby is available - usually because we're in a single-instance cluster -
and the annotation `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary` is
set to `true`, Kasten will temporarily fence the primary, causing downtime in
terms of read-write operations. This use case is normally left to development
environments.
In terms of recovery, the integration with Kasten supports snapshot recovery only. No Point-in-Time Recovery (PITR) is available at the moment with the Kasten add-on, and RPO is determined by the frequency of the snapshots in your Kasten environment. If your organization relies on Kasten, this usually is acceptable, but if you need PITR we recommend you look at the native continuous backup method on object stores.
## Velero
Velero is an open-source tool to safely back up, restore, perform disaster
recovery, and migrate Kubernetes cluster resources and persistent volumes. For
more information, see the Velero documentation.
To enable Velero compatibility with an EDB Postgres for Kubernetes Cluster, add
the `velero` value to the `k8s.enterprisedb.io/addons` annotation in a `Cluster` spec.
For example:
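A minimal sketch (the cluster name and sizing are illustrative):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # Enable the Velero add-on for this cluster
    k8s.enterprisedb.io/addons: '["velero"]'
spec:
  instances: 3
  storage:
    size: 1Gi
```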
Once the cluster is created and healthy, the operator will select the farthest ahead replica instance to be the designated backup and will add Velero-specific backup hooks as annotations to that instance.
These annotations are used by Velero to run the commands to prepare the Postgres instance to be backed up.
Important
The operator will refuse to shut down a primary instance to take a cold backup unless the Cluster is annotated with
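Based on the limitations described below, this is presumably the `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary` annotation; a sketch:

```yaml
metadata:
  annotations:
    # Allow the primary of a single-instance cluster to be fenced for a cold backup
    k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary: "true"
```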
### Limitations
As far as the backup part is concerned, currently, the EDB Postgres for
Kubernetes integration with Velero supports cold backups only. These are
also referred to as offline backups. This means that the selected replica
is temporarily fenced so that Velero can take a physical
snapshot of the PVC group, namely the `PGDATA` volume and, where available,
the WAL volume.
In this short timeframe, the standby cannot accept read-only connections.
If no standby is available - usually because we're in a single-instance cluster -
and the annotation `k8s.enterprisedb.io/snapshotAllowColdBackupOnPrimary` is
set to `true`, Velero will temporarily fence the primary, causing downtime in
terms of read-write operations. This use case is normally left to development
environments.
In terms of recovery, the integration with Velero supports snapshot recovery only. No Point-in-Time Recovery (PITR) is available at the moment with the Velero add-on, and RPO is determined by the frequency of the snapshots in your Velero environment. If your organization relies on Velero, this usually is acceptable, but if you need PITR we recommend you look at the native continuous backup method on object stores.
### Backup
By design, EDB Postgres for Kubernetes offloads as much of the backup functionality as possible to Velero, with the only requirement being that the previously mentioned backup hooks are made available. Since EDB Postgres for Kubernetes transparently sets all the needed configurations, and the rest is standard Velero, using Velero to back up a Postgres cluster is as straightforward as it would be for any other object. For example:
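A minimal invocation could look like the following, where the backup name and namespace are placeholders for your own:

```shell
# Back up everything in the namespace that hosts the Postgres cluster
velero backup create mybackup --include-namespaces mynamespace
```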
This command will create a standard Velero backup using the configured object storage and the configured Snapshot API.
Important
By default, the Velero add-on excludes only a few resources from the backup
operation, namely the pods and PVCs of the instances that have not been selected
(as you recall, the operator tries to back up the PVCs of the first replica).
However, you can use the options of the `velero backup` command to fine-tune
the resources you want to be part of your backup.
### Restore
As with backup, the recovery process is a standard Velero procedure. The command to restore from a backup created with the above parameters would be:
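Assuming an existing Velero backup named `mybackup` (a placeholder), the restore would be:

```shell
# Restore the cluster resources and PVCs from the named backup
velero restore create myrestore --from-backup mybackup
```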