Percona XtraDB Cluster Operator backup management

The Percona XtraDB Cluster Operator backups documentation seems to describe full backups only, but isn't explicit about this. Would it be desirable to support daily incremental backups? I can imagine there is a restriction when using an S3-compatible interface for backup storage, since you would have to download the previous backup to read e.g. the LSNs, but that should not apply to the filesystem storage type.
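
For context, this is roughly how incremental backups work with xtrabackup itself, outside the operator: the incremental run has to read the LSN recorded in the previous backup's xtrabackup_checkpoints file (the paths below are purely illustrative, and none of this is something the operator does today):

% xtrabackup --backup --target-dir=/backup/full
% grep to_lsn /backup/full/xtrabackup_checkpoints    # LSN the next incremental starts from
% xtrabackup --backup --target-dir=/backup/inc1 \
      --incremental-basedir=/backup/full             # reads the LSN from the previous backup

With the filesystem storage type the previous target directory is already sitting on the PVC, whereas with S3 it (or at least its xtrabackup_checkpoints file) would first have to be fetched.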

For users with non-trivial databases (say, more than a few tens of gigabytes), daily full backups produce a large amount of data and can take a long time to complete.

Sub-question (and possible bug): for the above to work with the filesystem storage type, the backup job must have a PVC. I can see the documentation describing this, and there is an example in the deploy/cr.yaml file ("daily-backup"), which my schedule is based on (see the sketch after the pod listing below). I can also see that my job has completed:


% kubectl get pods | grep daily
daily-backup-1572998400-knj75   0/1     Completed   0          7h33m
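
For reference, the daily-backup example in deploy/cr.yaml looks roughly like this (quoted from memory, so treat it as a sketch; the storage class and size are illustrative rather than my exact values):

  backup:
    storages:
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            storageClassName: standard
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 6Gi
    schedule:
      - name: "daily-backup"
        schedule: "0 0 * * *"
        keep: 5
        storageName: fs-pvc

Since the storage type is filesystem, my expectation is that each run of this schedule creates its own PVC and mounts it into the backup pod.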

However, I do not see any volumes or PVCs for that job:


% kubectl get pvc
NAME                                             STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cluster1-xb-cron-cluster1-20191031094835-18xf6   Bound    pvc-81891b84-154a-4598-a2f7-a2ff0a68bd52    6Gi        RWO            standard       2d7h
cluster1-xb-cron-cluster1-20191101095210-18xf6   Bound    pvc-fb1937da-7ea5-4b43-91a5-d8226ea02a3b    6Gi        RWO            standard       2d7h
cluster1-xb-cron-cluster1-20191105093402-h05wm   Bound    pvc-4b39f379-03a1-493b-aa39-1dffdcbb81a3    6Gi        RWO            standard       31h
cluster1-xb-cron-cluster1-20191106092710-h05wm   Bound    pvc-ccc55fba-6e82-4222-baec-c14493439202    6Gi        RWO            standard       7h34m
datadir-cluster1-pxc-0                           Bound    pvc-270606d9-e11b-4ba3-ad6d-52f694ccdde7    6Gi        RWO            local-path     2d7h
datadir-cluster1-pxc-1                           Bound    pvc-269e53f8-7e41-40c3-867c-69f2c3193975    6Gi        RWO            local-path     2d7h
datadir-cluster1-pxc-2                           Bound    pvc-6332a999-cf66-4baf-93ad-3d2f22031e90    6Gi        RWO            local-path     2d7h
datadir-cluster1-pxc-3                           Bound    pvc-6a88e735-29c5-4bf7-a0dd-86d3bd82529c    6Gi        RWO            local-path     104m
datadir-cluster1-pxc-4                           Bound    pvc-c105ee1d-af8f-4fb6-9ec0-c98b12937b4a    6Gi        RWO            local-path     103m
proxydata-cluster1-proxysql-0                    Bound    pvc-8296f94b-34da-4c0f-8435-8d5abc38b4cb    2Gi        RWO            local-path     2d7h
proxydata-cluster1-proxysql-1                    Bound    pvc-5d0cd5ac-40a0-4c8a-9e28-63bb4572acec    2Gi        RWO            local-path     99m

and there is no corresponding volume either. I can see both the PVCs and volumes for the cron backups (the cluster1-xb-cron-* entries above).

Looking at the pod itself, it appears that no backup volume was mounted into it at all:


% kubectl get pod daily-backup-1572998400-knj75 -o yaml | grep -A 4 volume
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: percona-xtradb-cluster-operator-token-g5brh
      readOnly: true
  dnsPolicy: ClusterFirst
--
  volumes:
  - name: percona-xtradb-cluster-operator-token-g5brh
    secret:
      defaultMode: 420
      secretName: percona-xtradb-cluster-operator-token-g5brh
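
For comparison, what I would have expected to find in the pod spec is something along these lines (the volume name, claim name and mount path here are guesses for illustration, not actual output):

    volumeMounts:
    - mountPath: /backup                  # illustrative mount path
      name: xtrabackup
  volumes:
  - name: xtrabackup
    persistentVolumeClaim:
      claimName: daily-backup-...         # hypothetical per-job PVC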

Is this a bug?

Thanks,
Martin