But it seems the "mongorestore" command is not working on the Percona rs pods.
Since you have a sharded cluster configured, the proper way to restore is via mongos and not shards (rs) directly.
Are you trying to restore via mongos instance? If not, where are you trying to restore? How was the backup taken and how did you try to restore it? What error are you getting?
We need the above information before we can suggest anything further.
I have no experience moving data between MongoDB and Percona MongoDB.
Since the official documentation describes restoring from existing backup data (generated by the k8s psmdb operator) by applying CRDs,
I am confused about what I should do with non-k8s DB data.
Do you mean I should create another client pod (which can run mongosh or other mongo CLI commands) and then run the operations the same way I did on the non-k8s MongoDB, rather than on one of the k8s mongo components?
Our team is working on a new tool (Percona Link for MongoDB). It is not production ready yet, but you can try it out; the migration will probably go smoothly.
Thank you for introducing me to this new tool.
I was wondering whether the migration method below (mongodump → mongorestore) is supported and safe to use with Percona MongoDB clusters.
Here is the scenario:
I am operating a Kubernetes-based environment using Percona Operator for MongoDB.
I would like to migrate data from an existing external MongoDB instance (not managed by Percona) to our Percona Server for MongoDB cluster deployed via the Operator.
My current migration plan is as follows:
From a client pod inside the same Kubernetes cluster, we will execute the following:
Run mongodump against the external MongoDB instance to create a data dump.
Create a connection via mongos.
Then run mongorestore against the target Percona MongoDB cluster (psmdb); a rough command sketch follows after this list.
The client pod has access to both:
The source MongoDB instance (via external network)
The target Percona MongoDB cluster (via internal Kubernetes service psmdb-db-mongos.percona.svc.cluster.local)
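To make the plan concrete, here is a minimal sketch of the commands I intend to run from the client pod. The external hostname, user names, and passwords are placeholders; only the mongos service name is the actual one in our cluster.

```bash
# Rough sketch; hostnames, users, and passwords are placeholders.
# 1. Dump from the external (non-Percona) MongoDB instance
mongodump \
  --uri="mongodb://backupUser:backupPass@external-mongo.example.com:27017/?authSource=admin" \
  --out=/tmp/dump

# 2. Restore through the mongos service of the Percona cluster
mongorestore \
  --uri="mongodb://databaseAdmin:adminPass@psmdb-db-mongos.percona.svc.cluster.local:27017/?authSource=admin" \
  --dir=/tmp/dump
```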
To back up a sharded cluster with mongodump, you must stop the balancer, stop writes, and stop any schema transformation operations on the cluster. This helps reduce the likelihood of inconsistencies in the backup.
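For example, the balancer can be stopped and re-enabled from the client pod with mongosh against mongos; the connection string and credentials below are placeholders.

```bash
# Stop the balancer before taking the dump / running the restore
mongosh "mongodb://databaseAdmin:adminPass@psmdb-db-mongos.percona.svc.cluster.local:27017/admin" \
  --eval 'sh.stopBalancer(); sh.getBalancerState()'

# ... run mongodump / mongorestore here ...

# Re-enable the balancer once the data is in place
mongosh "mongodb://databaseAdmin:adminPass@psmdb-db-mongos.percona.svc.cluster.local:27017/admin" \
  --eval 'sh.startBalancer()'
```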
Hello @radoslaw.szulgo
In the end, I deleted all the resources and re-installed psmdb.
It seems like the Percona operator does not perform any failover or recovery process when a FULL CLUSTER CRASH error occurs.
Hopefully there is a way to help the Percona operator recover from this.
Below is what I did:
set sharding.balancer.enabled: false
ran the mongorestore command on a bastion server via mongos
it failed for two reasons:
the PV was smaller than the total data size.
the rs pod got an OOM error because the CPU and memory limits were too small.
After a few minutes, the FULL CLUSTER CRASH error occurred.
It seems like the operator does not apply any modifications to PV size or resource limits while the DB cluster is in an error state, so these need to be sized correctly up front (a rough sketch follows below).
The error didn't go away for several days, until I deleted the Helm chart.
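For anyone hitting the same issue, this is roughly how I would now size the resources and PV and disable the balancer before starting the restore, rather than after the cluster has crashed. The value paths are my assumption of the psmdb-db Helm chart layout and may differ between chart versions, so please verify them with `helm show values percona/psmdb-db` first.

```bash
# Sketch only; value paths are assumptions and depend on the psmdb-db chart version.
helm upgrade psmdb-db percona/psmdb-db -n percona \
  --set sharding.balancer.enabled=false \
  --set 'replsets[0].resources.limits.cpu=2' \
  --set 'replsets[0].resources.limits.memory=4Gi' \
  --set 'replsets[0].volumeSpec.pvc.resources.requests.storage=100Gi'
```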