If someone has a moment, can anyone explain how the “destination” and “storageName” specs work behind the scenes? What configuration possibilities are there in restore.yaml?
The issue:
I made a backup, and it is visible via the kubectl get pxc-backup -A command (on the older cluster, but that is not the issue in this context).
I want to restore to my local cluster, which is a completely different environment. So I built PXC there as usual; it’s working well and is just empty. I used the same secrets as on the original cluster (also not relevant to this issue).
For now I’m not using AWS to put the backup in an S3 bucket, so I moved the files to a PVC in this target local cluster.
The files are:
md5sum.txt, sst_info, xtrabackup.stream
They are definitely in c62e3d10-075b-4e45-a8c6-366c5eced344.
I’ve tried many variations, but the error is always the same:
Warning FailedScheduling 20s (x2 over 20s) default-scheduler persistentvolumeclaim “pvc-c62e3d10-075b-4e45-a8c6-366c5eced344” not found
In this new cluster, kubectl get pxc-backup -A shows nothing. I want to restore the xtrabackup.stream from another cluster, which was uploaded to a PVC.
That PVC was released.
Should I leave the PVC in a non-Bound state so that it’s unused and ready for pxc-restore?
Are there alternative options or adjustments that could help pxc-restore (for instance, uploading xtrabackup.stream with kubectl cp to the restoration pod)?
Is it a pure Kubernetes volume error with nothing to do with PXC?
Maybe I’m doing something stupid, so rubber duck debugging would be helpful :)
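For context, this is roughly the shape of the restore.yaml I mean — a sketch only; the cluster and backup names here are placeholders, not my actual objects:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1   # name of the target PXC cluster
  backupName: backup1    # an existing pxc-backup object in this cluster
```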
When the backup was marked as Succeeded via kubectl get pxc-backup -n percona-l, I accessed the content of its PVC and replaced the files there with the required backup: md5sum.txt, sst_info, xtrabackup.stream (so that’s the main trick if the goal is just to see something working).
I initiated the restore with kubectl apply -f restore.yaml.
So I guess this command can be used to check whether the destination is correct:
kubectl get pvc/xb-cron-mattermost-clust-20210101000000-00000 -n percona-l
And it sounds like the pxc-backup object is necessary, right? So the command below prints, in its DESTINATION column, the name that should be used in restore.yaml:
kubectl get pxc-backup -n percona-l
Please correct me if I’m wrong. In fact, I’m sure I’m doing it incorrectly.
Anyway, that worked! If there is a way to avoid step 6 and do this differently nowadays, that would be useful so I can follow the recommended process.
Thank you for submitting this, and specifically for the way you solved it.
Unfortunately, you are correct: there is no “operator” way to restore PXC on another Kubernetes cluster if the backups are stored on a PVC.
The main reason is that the new Kubernetes cluster is not aware of PVCs from another Kubernetes cluster. We will discuss internally and with the community how this can be solved without step 6.
On the other hand, if your backups are stored on S3, you can easily recover PXC on another Kubernetes cluster. It is described in our docs:
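In that case the restore can reference the S3 location directly via backupSource instead of a pxc-backup object — a sketch with illustrative bucket, secret, and cluster names:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1                           # target PXC cluster in the new k8s cluster
  backupSource:
    destination: s3://S3-BUCKET-NAME/BACKUP-NAME # path to the backup in the bucket
    s3:
      credentialsSecret: my-cluster-backup-s3    # secret holding the S3 access keys
      region: us-west-2
```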
Yep, S3 would be easier and is the recommended approach for keeping backups in a completely different place. In my case it’s in a PVC on NFS, and the system is not sensitive at this phase. Anyway, there are sufficient methods to unblock activities while working with Percona XtraDB until it is updated with more simplified restore options. Many thanks for that.
I had also overlooked the MinIO service for those who are not in a public cloud. It has a distributed/HA mode and a write-only mode for security purposes, and it is S3 compatible. Of course, you are responsible for ensuring its security and administering it yourself. MinIO | Deploy MinIO on Docker Compose . Hope that helps someone.
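Since MinIO is S3 compatible, the operator’s backup storage in cr.yaml can point at it as a custom S3 endpoint — a sketch assuming a MinIO service reachable in-cluster; the bucket, secret, and endpoint names are illustrative:

```yaml
backup:
  storages:
    minio-storage:
      type: s3
      s3:
        bucket: pxc-backups                        # illustrative bucket created in MinIO
        credentialsSecret: my-cluster-backup-minio # secret with the MinIO access/secret keys
        endpointUrl: http://minio-service:9000     # in-cluster MinIO endpoint (illustrative)
        region: us-east-1
```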