Restore backup to another cluster

Hi Percona guys,

If someone has a moment, can anyone explain how the “destination” and “storageName” specs work behind the scenes? What are the configuration possibilities in restore.yaml?

restore.yaml:

    apiVersion: pxc.percona.com/v1
    kind: PerconaXtraDBClusterRestore
    metadata:
      name: restore-mattermost
      namespace: percona-l
    spec:
      pxcCluster: my-percona-cluster
      backupSource:
        destination: pvc/pvc-14834c75-f3b4-427c-80ee-3f82160e48b7
        storageName: pvc

The issue:
I have made a backup, and it is visible via the kubectl get pxc-backup -A command (on the old cluster; that part is not the issue in this context).

I want to restore to my local cluster, which is a completely different environment. So I built PXC there as usual; it's working well and is just empty. I used the same secrets as on the original cluster (also not relevant to this issue).

I'm not using AWS to put the backup in an S3 bucket for now, so I moved the backup files to a PVC in this target local cluster.
The files are:
md5sum.txt sst_info xtrabackup.stream

They are definitely present in c62e3d10-075b-4e45-a8c6-366c5eced344.

I've tried many variations, but the error is always the same:

Warning FailedScheduling 20s (x2 over 20s) default-scheduler persistentvolumeclaim “pvc-c62e3d10-075b-4e45-a8c6-366c5eced344” not found

In this new cluster, kubectl get pxc-backup -A shows nothing. I want to restore an xtrabackup.stream from another cluster, which was uploaded to a PVC.

That PVC was released.
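
For context, this is roughly how the claim and volume state can be inspected (namespace and names as above):

    # list the claims the scheduler can actually see in the namespace
    kubectl get pvc -n percona-l
    # list the underlying volumes with their Bound/Released status and reclaim policy
    kubectl get pv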

Should I leave the PVC in a non-Bound state, so that it's unused and ready for pxc-restore?

Are there any alternatives or adjustments that could help pxc-restore (for instance, uploading xtrabackup.stream to the restoration pod using ‘kubectl cp’, etc.)?

Is this a pure Kubernetes volume error that has nothing to do with PXC?

Maybe I'm doing something stupid, so rubber-duck debugging would be helpful (:

Many thanks


I have managed to solve it, though not directly; this may shed some light on what didn't work.

Complete steps:

  1. Made a backup with pxc-backup on the source cluster (it does the backup based on cr.yaml)
  2. Downloaded the files md5sum.txt, sst_info and xtrabackup.stream from the PVC, or using copy-backup.sh
  3. Created secrets in the destination cluster, matching the secrets in the source cluster
  4. Created a PXC cluster
  5. Arranged an on-demand backup of the empty cluster by creating an object with kubectl apply -f backup-object.yaml:
    apiVersion: pxc.percona.com/v1
    kind: PerconaXtraDBClusterBackup
    metadata:
      labels:
        ancestor: daily-backup
        cluster: my-percona-cluster
        type: cron
      name: cron-mattermost-clust-20210101000000-00000
      namespace: percona-l
    spec:
      pxcCluster: my-percona-cluster
      storageName: fs-pvc
    status:
      completed: "2021-01-01T00:00:00Z"
      destination: pvc/restore-mattermost
      state: Succeeded
      storageName: fs-pvc
  6. Once the backup was marked as Succeeded via kubectl get pxc-backup -n percona-l, I accessed the content of its PVC and replaced the files there with the required backup files: md5sum.txt, sst_info, xtrabackup.stream (so that's the main trick if the goal is to see something working; see the helper-pod sketch after the restore manifest below)

  7. Initiated the restore with kubectl apply -f restore.yaml:

    apiVersion: pxc.percona.com/v1
    kind: PerconaXtraDBClusterRestore
    metadata:
      name: restore-mattermost
      namespace: percona-l
    spec:
      pxcCluster: my-percona-cluster
      backupSource:
        destination: pvc/xb-cron-mattermost-clust-20210101000000-00000
        storageName: pvc
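
For step 6, one way to get the files into the backup PVC is a throwaway helper pod plus kubectl cp. A sketch (the pod name and busybox image are my own illustration, and it assumes nothing else is mounting the PVC at that moment):

    # helper.yaml -- throwaway pod that mounts the backup PVC so files can be copied in
    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-helper
      namespace: percona-l
    spec:
      containers:
      - name: shell
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: backup
          mountPath: /backup
      volumes:
      - name: backup
        persistentVolumeClaim:
          claimName: xb-cron-mattermost-clust-20210101000000-00000

Then:

    kubectl apply -f helper.yaml
    kubectl cp md5sum.txt percona-l/pvc-helper:/backup/
    kubectl cp sst_info percona-l/pvc-helper:/backup/
    kubectl cp xtrabackup.stream percona-l/pvc-helper:/backup/
    kubectl delete pod pvc-helper -n percona-l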

So I guess this command can be used as a check that the destination is correct:

kubectl get pvc/xb-cron-mattermost-clust-20210101000000-00000 -n percona-l

And it sounds like the pxc-backup object is necessary, right? The command below prints, in the DESTINATION column, the name that should be used in restore.yaml:

kubectl get pxc-backup -n percona-l
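
Or, to print just the destination for a given backup object (this reads the status.destination field visible in the backup object above):

    kubectl get pxc-backup cron-mattermost-clust-20210101000000-00000 -n percona-l -o jsonpath='{.status.destination}'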

Please correct me if I'm wrong; in fact, I'm sure I'm not doing this the intended way.

So anyway, that worked! If there is a way to avoid step 6 and do this differently nowadays, that would be useful, so I could follow the recommended process.

Thanks


Hello @Laimis,

thank you for submitting this, and specifically for the way you solved it.
Unfortunately, you are correct: there is no “operator” way to restore PXC on another Kubernetes cluster if the backups are stored on a PVC.
The main reason is that the new Kubernetes cluster is not aware of PVCs from another Kubernetes cluster. We will discuss internally and with the community to see how this can be solved without step 6 :)

On the other hand, if your backups are stored on S3, you can easily recover PXC on another Kubernetes cluster. It is described in our docs:

...
backupSource:
  destination: s3://S3-BUCKET-NAME/BACKUP-NAME
  s3:
    credentialsSecret: my-cluster-name-backup-s3
    region: us-west-2
    endpointUrl: https://URL-OF-THE-S3-COMPATIBLE-STORAGE
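
The credentialsSecret referenced there is a plain Kubernetes Secret. A minimal sketch, assuming the usual AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY key names (check the docs for your operator version):

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-cluster-name-backup-s3
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: <base64-encoded access key id>
      AWS_SECRET_ACCESS_KEY: <base64-encoded secret access key>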

Yep, S3 would be easier and is the recommended approach for keeping the backup in a completely different place. In my case it's on a PVC backed by NFS, and the system is not sensitive at this phase. Anyway, there are sufficient methods to unblock work with Percona XtraDB until it is updated with simpler restore options. Many thanks for that.


I also overlooked MinIO for those who are not in a public cloud. It has a distributed/HA mode, a put-only/write-only mode for security purposes, and is S3-compatible. Of course, you are responsible for ensuring its security and administering it yourself. MinIO | Deploy MinIO on Docker Compose . Hope that helps someone.
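
Since MinIO is S3-compatible, the backupSource shape from the earlier reply should work against it too. A sketch, assuming an in-cluster MinIO reachable at the service URL below (bucket, secret, and service names are illustrative):

    backupSource:
      destination: s3://S3-BUCKET-NAME/BACKUP-NAME
      s3:
        credentialsSecret: my-cluster-name-backup-minio
        region: us-east-1
        endpointUrl: http://minio-service.minio.svc.cluster.local:9000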


I was not right about the write-only mode: read, write, and delete access is needed to check and clean up backups.
