Percona Operator with bitnami/mongodb-sharded

Hi Everyone!

We have a bitnami/mongodb-sharded database running in our cluster, as we almost always use open-source software, and we needed sharding.

My question is the following:

Can I use the Percona Operator in our Kubernetes cluster to back up the database automatically?

Sorry if the question is irrelevant, but I have not found enough information in the docs on how to do what I'm aiming for.

Do I have to use Percona Server only?

Am I missing something?

(As far as I can see, when I deploy via Helm, it deploys Percona Server, and the rest of the guide is based on using that.)

Percona Backup for MongoDB (PBM) only works with Percona Server for MongoDB.
If you simply swap the MongoDB image in the Bitnami Helm chart for the Percona one, the chart will not work, since Bitnami's initialization and management scripts are baked into their Docker image.

Thank you for your answer. We have an active bitnami/mongodb-sharded deployment, so we can't just migrate the data. As far as I can tell, filesystem copies and mongodump are discouraged, as they do not work with sharded clusters (version 4.2+). Is MongoDB Atlas our only option? Do you think it is at all possible to back up this data?

Your input is very much appreciated!

Why are you not using the Percona Operator for MongoDB directly? What you need (sharding, backups, etc.) is easily achievable with the Percona Operator, and it is fully open source.
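For example, scheduled backups are just a few lines in the backup section of the custom resource (or the corresponding Helm values). A rough sketch, with field names taken from the operator's sample cr.yaml and a placeholder storage name:

backup:
  enabled: true
  tasks:
    - name: daily-backup
      enabled: true
      schedule: "0 3 * * *"    # standard cron syntax, daily at 03:00
      keep: 3                  # number of backups to retain
      storageName: my-s3       # must match a storage defined under backup.storages
      compressionType: gzip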

Apologies, will do. My confusion arose because, when I wanted to install the operator on the Kubernetes cluster, the docs pointed me to the MongoDB backup doc, and Percona Server was part of the installation.

OK, thanks for the info! Would you mind sharing this broken link? The correct operator documentation should be available here: Percona Operator for MongoDB

My apologies again, but I was looking at the wrong docs; the documentation links are fine.
I'm still not sure I can use the Percona Operator with my existing Bitnami deployment. The operator's documentation suggests that backing it up is not really possible, as stated here:

Please correct me if I’m wrong

Hi again. You are right, you probably cannot use the backup directly within your current Bitnami deployment. My recommendation would be to migrate from the Bitnami deployment to the Percona Operator one. There is a detailed comparison here, Comparison with other solutions - Percona Operator for MongoDB, that explains why it makes sense to do so once you need shards or really efficient backups. The migration could be based on a dump and restore to the new cluster.

Thank you for clearing this up; it confirmed my suspicion.
Unfortunately, mongodump/mongorestore and even filesystem snapshots are discouraged for MongoDB 4.2+, as they lose atomicity on sharded clusters.
Thank you again.

That is correct if you are approaching them as backup methods, but they can still be used for a cold migration, and additional methods exist for doing a hot migration. So, if I may ask, is there a specific issue that prevents you from replacing Bitnami with the Percona Operator in your environment?
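For illustration, a cold migration could be as simple as a one-shot Job inside the cluster that pipes mongodump into mongorestore while writes and the balancer are stopped. This is only a sketch: the service names, the credentials secret, and the image below are assumptions you would need to adjust to your releases.

apiVersion: batch/v1
kind: Job
metadata:
  name: bitnami-to-percona-migration
  namespace: experimental
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          # Any image that ships mongodump/mongorestore will do.
          image: mongo:5.0
          command: ["/bin/sh", "-c"]
          args:
            - >
              mongodump --uri="mongodb://$SRC_USER:$SRC_PASS@<bitnami-mongos-service>:27017/?authSource=admin" --archive |
              mongorestore --uri="mongodb://$DST_USER:$DST_PASS@<percona-mongos-service>:27017/?authSource=admin" --archive
          envFrom:
            # Hypothetical Secret holding SRC_USER, SRC_PASS, DST_USER, DST_PASS.
            - secretRef:
                name: migration-credentials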

We are doing exactly that: we installed Percona Server and we are using MinIO for local hosting.
However, I have run into trouble with the backup deployment; as in a lot of the old tickets, the backup fails with the starting deadline exceeded error. For some reason it can't find the cluster (even though the name of the cluster is correct), or there is something wrong with the secret, but I think that is an issue with either the MinIO keys or the way the backup objects should look.
Does the backup config really have to have aws_access_key_id and aws_access_key_secret?

Hi, I don't have experience with MinIO, but the issues you are describing might be related to the backup storage.

Would you mind sharing your backup configuration (without the key and ID)? The backup config needs to have the key and ID encoded in base64.
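As far as I know, the operator looks for the standard AWS variable names as the keys of that credentials secret, with the values base64-encoded. Roughly like this (names in angle brackets are placeholders):

apiVersion: v1
kind: Secret
metadata:
  # Must match credentialsSecret in your backup storage configuration.
  name: <your-s3-credentials-secret>
  namespace: <your-namespace>
type: Opaque
data:
  # Values are base64-encoded, e.g. `echo -n 'accessKey' | base64`
  AWS_ACCESS_KEY_ID: <base64-encoded access key>
  AWS_SECRET_ACCESS_KEY: <base64-encoded secret key>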

Yes, of course. I have tried a lot of different solutions to my current problem, but I'm stuck.

This is my on-demand backup which I use for testing:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  finalizers:
  - delete-backup
  name: backup-test
  namespace: experimental
spec:
  clusterName: percona-mongo-psmdb-d
  storageName: minio

I use this secret:

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
  namespace: experimental
type: Opaque
data:
  AWS_ACCES_KEY_ID: <minio-given-accesKey>
  AWS_ACCES_KEY_SECRET: <minio-given-secretKey>

And here is my server config:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: percona-mongo
  namespace: experimental
spec:
  chart:
    spec:
      chart: psmdb-db
      sourceRef:
        kind: HelmRepository
        name: percona
        namespace: experimental
  interval: 1m0s
  values:
    sharding:
      enabled: true
    backup:
      enabled: true
      storages:
        minio:
          type: s3
          s3:
            bucket: test-bucket
            region: <configured-region>
            credentialsSecret: s3-secret
            endpointUrl: <MinIO Endpoints>

The Percona server's backup-agent pods are giving me this error constantly, even though I followed every instruction, including Jira tickets that said to upgrade the version; I am on version 1.14.0.

2023-08-14T10:55:13.000+0000 E [agentCheckup] check storage connection: storage check failed with: get S3 object header: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I have tried creating different access keys in MinIO and using those, and even tried changing the keys to lowercase as someone suggested, but nothing has worked so far.