Cannot restore backups on demand with pbmctl on PSMDB running on K8s

We recently deployed the Percona Server for MongoDB operator on our Kubernetes cluster following the blog (…ubernetes.html).
For backups, we configured S3 as storage and have been using it since. Backups work fine with both the on-demand and YAML configuration strategies.
Restoring the backups is where the problem lies.
If we use restore.yaml, the restoration process works. However, we could not get on-demand restoration to work.

As described in the blog, we launched a pod with the image percona/percona-server-mongodb-operator:0.3.0-backup-pbmctl and tried listing backups, which gives this error:

$ kubectl run -it -n psmdb --rm pbmctl --image=percona/percona-server-mongodb-operator:0.3.0-backup-pbmctl --restart=Never -- list backups --server-address=db-cluster-master-backup-coordinator:10001

ERRO[0000] Cannot get the list of available backups: Cannot get the connected agents list: rpc error: code = Unauthenticated desc = Request unauthenticated with bearer

So we do not know what the name of the backup file is.

Our on-demand backup command looks like the following:

$ kubectl run -n psmdb -it --rm pbmctl --image=percona/percona-server-mongodb-operator:0.3.0-backup-pbmctl --restart=Never -- \
run backup \
--storage s3-eu-west

This works: it creates three files named with a timestamp: 2019-11-25T06:37:30Z.json, 2019-11-25T06:37:30Z_rs0.dump.gz, and 2019-11-25T06:37:30Z_rs0.oplog.gz.

So we tried restoring with the command:

$ kubectl run -n psmdb -it --rm pbmctl --image=percona/percona-server-mongodb-operator:0.3.0-backup-pbmctl --restart=Never -- \
run restore \
--storage s3-eu-west \
<<backup-name>>

We replaced <<backup-name>> with the file name in the S3 bucket, as well as with the description we gave during the on-demand backup. Neither worked.
The error it gives is:

FATA[0000] Cannot send the RestoreBackup command to the gRPC server: rpc error: code = Unknown desc = invalid backup metadata file /data/psmdb-backup: open /data/backup-latest: no such file or directory
pod "pbmctl" deleted

Is there something that we are missing?
We are using percona/percona-server-mongodb-operator:1.2.0-mongod3.6 as the cluster DB.

Hi there, sorry for the delay in response.
The issue is that you’ve set the wrong backup-name while trying to restore. In this particular example it should be 2019-11-25T06:37:30Z instead of backup-latest.
This timestamp is the backup’s name.
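For instance, the name can be recovered from any of the three object names by stripping everything after the timestamp. A small sketch using the dump filename from the post above (the suffix pattern is taken from that example):

```shell
# The backup name is the timestamp prefix shared by the three objects
# the backup created; strip the replica-set suffix from the dump file.
dump_file="2019-11-25T06:37:30Z_rs0.dump.gz"
backup_name="${dump_file%%_*}"   # remove everything from the first "_"
echo "${backup_name}"            # prints 2019-11-25T06:37:30Z
```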
A list of the available backups can be obtained with the list backups command.
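Since the throwaway pod hits the authentication error, one place the command is reported to work is inside the coordinator pod itself. A sketch that only assembles that invocation rather than executing it (namespace and pod name are taken from this thread; adjust them to your cluster):

```shell
# Assemble the pbmctl list command for an exec into the coordinator pod
# (names assumed from this thread, not confirmed for every cluster).
ns="psmdb"
pod="db-cluster-master-backup-coordinator-0"
cmd="kubectl exec -it -n ${ns} ${pod} -- pbmctl list backups"
echo "${cmd}"
```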

Hope this helps!

We did find that it works once we exec inside the pod running the backup container, i.e. <<cluster-name>>-backup-coordinator-0.

But why isn’t it working with the throwaway pod that we launch for the same purpose?

$ kubectl run -it -n psmdb --rm pbmctl --image=percona/percona-server-mongodb-operator:0.3.0-backup-pbmctl \
--restart=Never -- list backups --server-address=db-cluster-master-backup-coordinator:10001

Cannot get the list of available backups: Cannot get the connected agents list: rpc error:
code = Unauthenticated desc = Request unauthenticated with bearer

Also, how do you restore backups in case of disaster, i.e. if the JSON files are deleted from the /data directory?
The workaround I see would be to download the JSON file from the S3 bucket locally and restore it using .
But it looks like the backup-coordinator image does not even have tar installed, which is required for kubectl cp to succeed.
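One way around the missing tar (which kubectl cp needs inside the container) is to stream the file over stdin with exec + cat. The aws and kubectl invocations in the comments are assumptions based on the names in this thread; the executable lines below only demonstrate the cat-over-stdin pattern itself locally:

```shell
# Hypothetical copy of the metadata file back into the pod without tar
# (bucket, pod, and path names are assumptions, not confirmed):
#   aws s3 cp s3://<bucket>/2019-11-25T06:37:30Z.json .
#   kubectl exec -i -n psmdb db-cluster-master-backup-coordinator-0 -- \
#     sh -c 'cat > /data/2019-11-25T06:37:30Z.json' < 2019-11-25T06:37:30Z.json
# The cat-over-stdin pattern itself, demonstrated locally:
printf '{"demo": true}' > /tmp/meta.json
sh -c 'cat > /tmp/meta-copy.json' < /tmp/meta.json
cat /tmp/meta-copy.json   # prints {"demo": true}
```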

Hello again, sdotel.

I just checked with the team on this, and they said they are aware of both issues you have encountered. Both are on the work plan to be fixed in release 1.4.0 of the Percona Server for MongoDB Operator.

Sincere apologies for any inconvenience. I don’t have a time frame for the release, but you can follow the project on GitHub or sign up for the weekly release newsletter to receive an availability notice.