Help setting up a Production sharded MongoDB Cluster with 3 replicas per shard

Hi, we are looking for some help setting up a production-ready sharded cluster with 3 replicas per shard.

However, we get the following error with the config below. Looking at this documentation (MongoDB sharding - Percona Operator for MongoDB), I can't figure out what I am missing from our config.

Additionally, looking at the error below and the Helm values (percona-helm-charts/charts/psmdb-db at main · percona/percona-helm-charts · GitHub), replsets.cfg.members does not seem to be a valid property we can adjust.

We're a bit confused about what we are missing.

2025-02-24T12:15:17.143Z ERROR failed to update cluster status {"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"psmdb-default-sharded","namespace":"mongodb-sharded"}, "namespace": "mongodb-sharded", "name": "psmdb-default-sharded", "reconcileID": "19eda940-1419-46ad-859a-270a2e21109b", "replset": "rs0", "error": "write status: PerconaServerMongoDB.psmdb.percona.com \"psmdb-default-sharded\" is invalid: status.replsets.cfg.members: Invalid value: \"object\": replsets.cfg.members in body must be of type array: \"object\"", "errorVerbose": "PerconaServerMongoDB.psmdb.percona.com \"psmdb-default-sharded\" is invalid: status.replsets.cfg.members: Invalid value: \"object\": replsets.cfg.members in body must be of type array: \"object\"\nwrite status\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).writeStatus\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/status.go:269\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).updateStatus\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/status.go:238\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile.func1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:266\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:470\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[…]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:116\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[…]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:303\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[…]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[…]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:224\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"}

finalizers:
  - delete-psmdb-pods-in-order
nameOverride: ${db_name}
unsafeFlags:
  replsetSize: true
upgradeOptions:
  apply: disabled
image:
  repository: percona/percona-server-mongodb
  tag: 8.0.4-1-multi
enableVolumeExpansion: true
replsets:
  rs0:
    name: rs0
    size: 3
    tolerations:
      - key: "karpenter/mongodb"
        operator: "Exists"
        effect: "NoSchedule"
    nodeSelector:
      karpenter-node-pool: mongodb
    resources: 
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 300m
        memory: 500M
    expose:
      enabled: false
    volumeSpec:
      pvc:
        storageClassName: mongodb
        resources:
          requests:
            storage: 100Gi
  rs1:
    name: rs1
    size: 3
    tolerations:
      - key: "karpenter/mongodb"
        operator: "Exists"
        effect: "NoSchedule"
    nodeSelector:
      karpenter-node-pool: mongodb
    resources: 
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 300m
        memory: 500M
    expose:
      enabled: false
    volumeSpec:
      pvc:
        storageClassName: mongodb
        resources:
          requests:
            storage: 100Gi
  rs2:
    name: rs2
    size: 3
    tolerations:
      - key: "karpenter/mongodb"
        operator: "Exists"
        effect: "NoSchedule"
    nodeSelector:
      karpenter-node-pool: mongodb
    resources: 
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 300m
        memory: 500M
    expose:
      enabled: false
    volumeSpec:
      pvc:
        storageClassName: mongodb
        resources:
          requests:
            storage: 100Gi
backup:
  enabled: false
sharding:
  enabled: true
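
For context, we deploy the psmdb-db chart with those values roughly like this. The release name, namespace, values file name, and the chart version pin are just what we happen to use on our side, so treat this as a sketch of our setup rather than anything canonical:

helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update
helm install psmdb-default-sharded percona/psmdb-db \
  --namespace mongodb-sharded \
  --version 1.19.1 \
  -f values.yaml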

Looks similar to this one:
https://forums.percona.com/t/error-upgrading-to-1-19-failed-to-update-cluster-status/36422/4

Interesting. Maybe you could shed some light on what the best practice is here.

We have a test EKS cluster, where I originally installed just one operator + db:

  1. namespace: mongodb - percona operator + db: version 1.18

I am now looking at testing out a sharded cluster and have installed it into:

  2. namespace: mongodb-sharded - percona operator + db: version 1.19.1

So could it be that, despite them being in separate namespaces, the CRDs originally installed for 1. are being used by 2.? What is the best way to keep them separate?
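
In case it's useful, this is how we've been poking at the CRDs. As far as we understand, CRDs are cluster-scoped, so there is only one set shared by both namespaces, and the creation timestamp should at least show whether they still date from the original 1.18 install. The commands are plain kubectl; the only Percona-specific bit is the CRD name we see on our cluster:

kubectl get crd | grep psmdb.percona.com
kubectl get crd perconaservermongodbs.psmdb.percona.com -o jsonpath='{.metadata.creationTimestamp}{"\n"}'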

psmdb-operator                  mongodb-sharded         1               2025-02-24 11:42:54.263168754 +0000 UTC deployed        psmdb-operator-1.19.1                  1.19.1
psmdb-operator                  mongodb                 1               2025-01-02 12:07:12.658132847 +0000 UTC deployed        psmdb-operator-1.18.0                  1.18.0

Aside: is there a separate Helm chart for the CRDs?
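
And assuming this is a CRD schema mismatch, would manually applying the 1.19.1 CRDs before (re)installing the 1.19.1 operator be the right move? Something like the below - the URL/path is just our guess at the matching manifest in the operator repo, so please correct us if there is a better supported way:

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.19.1/deploy/crd.yaml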