How to migrate existing MongoDB data to Percona MongoDB

Hello

I am using the MongoDB-compatible database that AWS offers as “DocumentDB”.
I want to migrate all of its data into the MongoDB running in my k8s cluster.

But it seems the “mongorestore” command is not working on the Percona rs pods.

Is there a Kubernetes-native way to migrate all the data?

My Percona server is running with the configuration below:
sharding="True",
pitr="True",
oplog="True",
Exporting data to AWS S3 is also working.
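For context, this setup corresponds roughly to the following fragment of the Operator's PerconaServerMongoDB custom resource (a sketch only; the storage name, bucket, region, and secret name below are placeholders):

```yaml
# Sketch of the relevant CR fields; the storage name "s3-us-east",
# the bucket, and the secret name are placeholders, not real values.
spec:
  sharding:
    enabled: true
  backup:
    enabled: true
    pitr:
      enabled: true
    storages:
      s3-us-east:
        type: s3
        s3:
          bucket: my-backup-bucket
          region: us-east-1
          credentialsSecret: my-cluster-backup-s3
```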

Hi @ciel ,

But it seems the “mongorestore” command is not working on the Percona rs pods.

Since you have a sharded cluster configured, the proper way to restore is via mongos and not shards (rs) directly.
Are you trying to restore via mongos instance? If not, where are you trying to restore? How was the backup taken and how did you try to restore it? What error are you getting?
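As a sketch of the point above, the restore would target the mongos service rather than a shard pod; every name, credential, and path here is a placeholder:

```shell
# Restore through mongos so the data is routed to the correct shards.
# Service name, user, password, and dump path are placeholders.
MONGOS_SVC="my-cluster-mongos.psmdb.svc.cluster.local"
DUMP_DIR="/tmp/dump"
CMD="mongorestore --uri mongodb://clusterAdmin:PASSWORD@${MONGOS_SVC}:27017/?authSource=admin --dir ${DUMP_DIR}"
echo "$CMD"   # printed for review; run it once real credentials are in place
```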
We need the above information before we can suggest next steps.

Regards,
Vinodh Guruji

Thanks for taking the time to reply to me.

I have no idea how to handle data between MongoDB and Percona MongoDB.

Since the official documentation describes restoring from existing backup data (generated by the k8s psmdb Operator) by applying CRDs,
I am confused about what I should do with non-k8s database data.

Do you mean I should create a separate client pod (one that can run mongosh or other Mongo CLI commands) and then perform the operation there, as I did with the non-k8s MongoDB, rather than on one of the k8s Mongo components?

Our team is working on a new tool (Percona Link for MongoDB). It is not production-ready yet, but you can try it out; the migration will probably go smoothly.

Thanks


Thank you for introducing me to this new tool.
I was wondering whether the migration method below (mongodump/mongorestore) is supported and safe to use with Percona MongoDB clusters.

Here is the scenario:

I am operating a Kubernetes-based environment using Percona Operator for MongoDB.

I would like to migrate data from an existing external MongoDB instance (not managed by Percona) to our Percona Server for MongoDB cluster deployed via the Operator.

My current migration plan is as follows:

  1. From a client pod inside the same Kubernetes cluster, we will execute the following:
  • Run mongodump against the external MongoDB instance to create a data dump.
  • Create a connection via mongos.
  • Then run mongorestore to the target Percona MongoDB cluster (psmdb).
  2. The client pod has access to both:
  • The source MongoDB instance (via the external network)
  • The target Percona MongoDB cluster (via the internal Kubernetes service psmdb-db-mongos.percona.svc.cluster.local)
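The steps above can be sketched as follows. The URIs, credentials, and dump directory are placeholders; the commands are echoed so they can be reviewed first, and the leading echo would be removed to actually run them:

```shell
# Dry-run sketch of the dump-and-restore plan from the client pod.
# SRC_URI, DST_URI, and DUMP_DIR are placeholders.
SRC_URI="mongodb://user:PASSWORD@my-docdb.example.amazonaws.com:27017/?tls=true"
DST_URI="mongodb://clusterAdmin:PASSWORD@psmdb-db-mongos.percona.svc.cluster.local:27017/?authSource=admin"
DUMP_DIR=/tmp/migration-dump

# 1) Dump everything from the external source instance.
echo mongodump --uri "$SRC_URI" --out "$DUMP_DIR"

# 2) Restore through the mongos service so writes are routed across shards.
echo mongorestore --uri "$DST_URI" --dir "$DUMP_DIR"
```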

@Ivan_Groenewold @radoslaw.szulgo can advise


I’m deeply grateful for your help.

To back up a sharded cluster with mongodump, you must stop the balancer, stop writes, and stop any schema transformation operations on the cluster. This helps reduce the likelihood of inconsistencies in the backup.

Did you?
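If the source is a genuine MongoDB sharded cluster (AWS DocumentDB may not expose these commands), the balancer can be paused around the dump roughly like this; the URI is a placeholder and the commands are echoed for review:

```shell
# Pause the balancer on the source cluster before mongodump and
# resume it afterwards. MONGOS_URI is a placeholder.
MONGOS_URI="mongodb://admin:PASSWORD@source-mongos.example.com:27017/?authSource=admin"
echo mongosh "$MONGOS_URI" --eval 'sh.stopBalancer()'
# ... run mongodump while the balancer is stopped ...
echo mongosh "$MONGOS_URI" --eval 'sh.startBalancer()'
```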

Yes, the mongodump process was successful, and the dump restored reliably into another AWS DocumentDB instance.

I am just worried about using the mongorestore command on Percona MongoDB, because the official documentation recommends creating a psmdb-restore object via a CRD.
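For reference, the CRD-based restore the documentation describes looks roughly like this (a sketch; the metadata name, clusterName, and backupName are placeholders):

```yaml
# Sketch of a PerconaServerMongoDBRestore object; the metadata
# name, clusterName, and backupName are placeholders.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: psmdb-db
  backupName: backup1
```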