Selective PerconaServerMongoDBRestore not possible in K8s with psmdb-db 1.18.0

Hi everyone,

Here is my configuration of a test cluster for backup/restore/update:

  • Kubernetes: 1.29.10
  • Helm chart psmdb-operator: 1.18.0 (1.19.0 fails to deploy)
  • Helm chart psmdb-db: 1.18.0 (1.19.0 fails to deploy)
  • Mongod: 6.0.19-16-multi
  • Operator: 1.18.0
  • PBM: 2.7.0-multi (2.8.0-multi fails with an index error during restore)

So I want to do a selective restore of specific MongoDB databases.

I have a complete logical backup, already taken and present, that I can restore in full without any problem!

But if I want to do a selective restore, I have two choices, and neither works properly:

  • K8s manifest (does not work at all)
  • PBM CLI (works, but after the restore the PBM status still shows the restore as in progress!? After 12 hours it has never finished!)
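For reference, the PBM CLI path looks roughly like the sketch below (the backup name and namespace pattern are placeholders copied from the manifest further down; run these inside a pod where pbm is configured against the cluster). pbm restore accepts an --ns filter for selective restores, and pbm describe-restore / pbm logs can be used to inspect a restore that seems stuck:

```shell
# Selective restore of one database via the PBM CLI (names are placeholders).
pbm restore 2025-02-04T11:30:21Z --ns "dbs_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.*"

# Inspect the restore that PBM still reports as "in progress".
pbm describe-restore <restore-name>

# Look at restore-related log events for errors.
pbm logs --event=restore
```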

What interests me first is the K8s manifest approach, so I wrote the file below:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: xxxxx-restore-logical-selectif-test-1
spec:
  clusterName: "psmdb-db-xxxxxxx"
  selective:
    withUsersAndRoles: true
    namespaces:
      - "dbs_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.*"
  backupSource:
    type: "logical"
    destination: "s3://xxxxx-data-backups-xxxxxxx/2025-02-04T11:30:21Z"
    s3:
      credentialsSecret: "xxx-xxxxxxxx-backup-s3"
      region: "fr-par"
      bucket: "xxxxx-data-backups-xxxxxxxx"
      endpointUrl: "https://s3.fr-par.xxx.xxxx"


Unfortunately, when applying the manifest with kubectl apply -f pbm-restore-logical-selectif-test-1.yaml, I get the following error:

Error from server (BadRequest): error when creating "pbm-restore-logical-selectif-test-1.yaml": PerconaServerMongoDBRestore in version "v1" cannot be handled as a PerconaServerMongoDBRestore: strict decoding error: unknown field "spec.selective"
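An "unknown field spec.selective" strict-decoding error usually means the CRD stored in the cluster predates the selective-restore feature. A quick way to check (a hedged sketch; the CRD name below is the one the operator normally installs) is to ask the API server what it knows about that field:

```shell
# Does the installed CRD schema know about spec.selective?
kubectl explain perconaservermongodbrestore.spec.selective

# Or inspect the raw CRD and search for the field directly.
kubectl get crd perconaservermongodbrestores.psmdb.percona.com -o yaml | grep -n selective
```

If neither command finds the field, the CRD in the cluster is older than the chart/operator you are running.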

Can you help me understand why this manifest does not apply? Am I wrong in the file format or the API version?

Thx. Fabien

Hi, have you upgraded this environment from a previous version of the operator? It seems the CRD might be outdated. If that is not the case, please open a bug at jira.percona.com and provide the cluster dump and the output of helm list -n <namespace>.

This documentation may be helpful Upgrade MongoDB and the Operator - Percona Operator for MongoDB
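If the CRDs are indeed stale, the upgrade docs above essentially boil down to re-applying the CRD bundle matching the operator version in use. A sketch (the URL assumes the v1.18.0 tag of the percona-server-mongodb-operator repository; adjust to your version):

```shell
# Re-apply the CRDs for the operator version you run.
# --server-side is used because the CRD manifests are large.
kubectl apply --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.18.0/deploy/crd.yaml
```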

Thank you for your answers @Ivan_Groenewold and @Natalia_Marukovich. For this test cluster I destroyed everything, CRDs included. I rebuilt it as needed, for testing only, with new disks and new data, and therefore also with new CRDs.

So @Ivan_Groenewold, I will certainly do as you said and report the bug in Jira regarding the CRDs.

@Natalia_Marukovich Regarding the Helm chart version upgrades, I do indeed run into problems. For example, I tried to go from chart 1.17.1 to 1.18.0 and from 1.17.1 to 1.19.0, and I hit problems related to values expected in the values.yaml file, such as .backup.volumeMounts, which I have to add, whereas if I deploy 1.19.0 directly I do not need to provide .backup.volumeMounts in values.yaml at all!? I will run another test and report back later on any problems I encounter.