Percona MongoDB Replica set configuration

I have a small Percona MongoDB database running on AWS. I stopped that instance, attached its volume to another VM, and copied all of the database files. I then moved these files to Azure and created another Percona MongoDB instance there. It starts successfully; however, after some time (~3 minutes) it shuts down.

Looking into the Operator logs, I can see this:

2024-10-04T14:15:08.425Z        INFO    initiating replset      {"controller": "psmdb-controller", "object": {"name":"perconamongodbcluster","namespace":"mongodb-test"}, "namespace": "mongodb-test", "name": "perconamongodbcluster", "reconcileID": "f9172ffe-ec97-459e-9a38-79313941a919", "replset": "rs0", "pod": "perconamongodbcluster-rs0-0"}
2024-10-04T14:15:20.963Z        ERROR   failed to reconcile cluster     {"controller": "psmdb-controller", "object": {"name":"perconamongodbcluster","namespace":"mongodb-test"}, "namespace": "mongodb-test", "name": "perconamongodbcluster", "reconcileID": "f9172ffe-ec97-459e-9a38-79313941a919", "replset": "rs0", "error": "handleReplsetInit: exec add admin user: command terminated with exit code 1 / Current Mongosh Log ID:\t66fff876b571db865b127c2b\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&tls=true&tlsCertificateKeyFile=%2Ftmp%2Ftls.pem&tlsAllowInvalidCertificates=true&tlsCAFile=%2Fetc%2Fmongodb-ssl%2Fca.crt&appName=mongosh+2.1.5\nUsing MongoDB:\t\t7.0.8-5\nUsing Mongosh:\t\t2.1.5\nmongosh 2.3.1 is available for download: https://www.mongodb.com/try/download/shell\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n / MongoServerError: Command createUser requires authentication\n",

So it seems that authentication is failing and the Operator is not able to initialize the replica set. When I look at the logs of the pod, I see strange things as well:

{"t":{"$date":"2024-10-04T14:21:38.859+00:00"},"s":"W",  "c":"REPL",     "id":21405,   "ctx":"ReplCoord-0","msg":"Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat","attr":{"error":{"code":74,"codeName":"NodeNotFound","errmsg":"No host described in new configuration with {version: 44, term: 13} for replica set rs0 maps to this node"},"localConfig":{"_id":"rs0","version":44,"term":13,"members":[{"_id":0,"host":"perconamongodbcluster-rs0-0.perconamongodbcluster-rs0.perconamongodb.svc.cluster.local:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"perconamongodbcluster-rs0-0","serviceName":"perconamongodbcluster","nodeName":"ip-10-20-23-65.eu-central-1.compute.internal"},"secondaryDelaySecs":0,"votes":1}],"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"66df0cb532b0d77ab0ea8b13"}}}}}

"Locally stored replica set configuration does not have a valid entry for the current node" - so it seems that the replica set configuration itself is not correct.

The question is: how can I reset the configuration completely and let MongoDB reconfigure itself? I have been trying to do that as the databaseAdmin user, with no success. I have also been trying to drop the local database, but I can't find a user with sufficient rights to do so. What options do I have?
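
For reference, what I have been attempting looks roughly like this (a sketch only; the credentials and target FQDN are placeholders, TLS options may be needed as in the Operator log above, and it fails for me with authorization errors):

# Sketch: connect directly to the single member and try to force-reconfigure
# the replica set so that members[0].host matches the pod's FQDN in the new
# (Azure) cluster. Placeholders: <password>, <new-pod-fqdn>.
kubectl -n mongodb-test exec -it perconamongodbcluster-rs0-0 -c mongod -- \
  mongosh "mongodb://databaseAdmin:<password>@localhost:27017/admin?directConnection=true" --eval '
    var cfg = rs.conf();
    cfg.members[0].host = "<new-pod-fqdn>:27017";
    rs.reconfig(cfg, { force: true });
  '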

rs.conf() output here:

{
  _id: 'rs0',
  version: 45,
  term: 13,
  members: [
    {
      _id: 0,
      host: 'perconamongodbcluster-rs0-0.perconamongodbcluster-rs0.perconamongodb.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 2,
      tags: [Object],
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('66df0cb532b0d77ab0ea8b13')
  }
}

rs.conf().members:

[
  {
    _id: 0,
    host: 'perconamongodbcluster-rs0-0.perconamongodbcluster-rs0.perconamongodb.svc.cluster.local:27017',
    arbiterOnly: false,
    buildIndexes: true,
    hidden: false,
    priority: 2,
    tags: {
        podName: 'perconamongodbcluster-rs0-0',
        serviceName: 'perconamongodbcluster',
        nodeName: 'ip-10-20-23-65.eu-central-1.compute.internal'
    },
    secondaryDelaySecs: Long('0'),
    votes: 1
  }
]

As you can see from the nodeName tag in rs.conf().members, the replica set still carries the AWS configuration: ip-10-20-23-65.eu-central-1.compute.internal follows the AWS node naming pattern.

Hello @Gvidas_Pranauskas ,

here are a few things:

  1. Don’t mind the tag ip-10-20-23-65.eu-central-1.compute.internal. It is inherited from the data you restored from the backup. The host field is what matters for you.
  2. As for the authentication issue - did you also restore the k8s Secret with the users? You need to have that Secret in place to keep things consistent; otherwise your data contains a system user with password X while the Operator thinks it should be Y.
    Take the secret from your AWS cluster (e.g. kubectl get secret my-cluster-name-secrets) and make sure it is also created in Azure - see the sketch below.
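
Roughly, copying it over can look like this (just a sketch: the kubectl context names "aws" and "azure", the namespace, and the secret name are placeholders for your setup; stripping the cluster-specific metadata avoids apply and ownerReference problems):

# Export the users secret from the AWS cluster and create it in the Azure cluster,
# dropping the fields that only make sense in the source cluster.
kubectl --context aws -n mongodb-test get secret perconamongodbcluster-secrets -o json \
  | jq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.ownerReferences, .metadata.managedFields)' \
  | kubectl --context azure -n mongodb-test apply -f -
# Repeat for the other secrets you want to carry over (e.g. the keyfile and
# ssl-internal secrets), so both clusters share the same credentials and key.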

Hello @Sergey_Pronin,
thanks for your help! Indeed, the problem was with the secrets. There is a key-file secret that I had not specified to reuse, so the database couldn’t start successfully.

However, I have an additional question. Assuming those two databases are running successfully and independently, is it possible to configure them as a multi-cluster deployment? I have tried to do that: following the documentation, I exposed both databases via cloud load balancers, but it seems that the databases sync their internal configuration and rely on internal DNS resolution and service discovery, which in turn means the requests fail. Do you have any specific recommendations, or do you know whether this is even possible?

It is possible. But to expose your databases you would need to use split horizons.
You can read more about it here: https://www.percona.com/blog/beyond-the-horizon-mastering-percona-server-for-mongodb-exposure-in-kubernetes-part-one/

It is not an easy thing, but it can be done. When done right, it just works :slight_smile:
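
Roughly, the idea is to give every pod an externally resolvable name directly in the Custom Resource, something along these lines (a sketch only: the hostnames are placeholders, and the splitHorizons field should be checked against the CRD of your operator version):

replsets:
  rs0:
    name: rs0
    expose:
      enabled: true
      exposeType: LoadBalancer
    # map each pod to the DNS name under which it is reachable from outside
    splitHorizons:
      perconamongodbcluster-rs0-0:
        external: rs0-0.mongodb.example.com
      perconamongodbcluster-rs0-1:
        external: rs0-1.mongodb.example.com
      perconamongodbcluster-rs0-2:
        external: rs0-2.mongodb.example.com

Keep in mind that the TLS certificates also have to cover those external names; that is part of what the blog post above walks through.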

Hey @Sergey_Pronin,
I have managed to set up split-horizon DNS on both of those databases; however, when I add the external nodes, it doesn’t seem to work. When one of the databases is added as externalNodes to the other, I see this error in the logs:

{"t":{"$date":"2024-10-11T09:21:18.629+00:00"},"s":"I",  "c":"REPL_HB",  "id":23974,   "ctx":"ReplCoord-19","msg":"Heartbeat failed after max retries","attr":{"target":<REDACTED>.elb.eu-central-1.amazonaws.com:27017","maxHeartbeatRetries":2,"error":{"code":93,"codeName":"InvalidReplicaSetConfig","errmsg":"replica set IDs do not match, ours: 6708e910695e88f83de619cc; remote node's: 6707967540c1f43ce61f33ff"}}}

I would like to mention that both replica sets hold identical data, but I assume they were initiated independently and thus ended up with different IDs. Any ideas on what might be wrong here?
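
For completeness, this is roughly how I compare the two IDs (a sketch; the context names, namespace, pod name and credentials are placeholders for my setups):

# Print the replicaSetId of each cluster to confirm they differ.
for ctx in aws azure; do
  kubectl --context "$ctx" -n mongodb-test exec perconamongodbcluster-rs0-0 -c mongod -- \
    mongosh "mongodb://clusterAdmin:<password>@localhost:27017/admin?directConnection=true" \
      --quiet --eval 'rs.conf().settings.replicaSetId'
done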

@Gvidas_Pranauskas so at least they talk to each other.
Now we need to dig deeper and see your Custom Resource manifest. I’m mostly interested in how you add the nodes and how you provision them. Can you tell me a bit more?

On the main replication site, I have added externalNodes like this:

    externalNodes:
    - host: <REDACTED>.elb.eu-central-1.amazonaws.com
      port: 27017
      votes: 0
      priority: 0
    - host: <REDACTED>.elb.eu-central-1.amazonaws.com
      port: 27017
      votes: 0
      priority: 0
    - host: <REDACTED>.elb.eu-central-1.amazonaws.com
      port: 27017
      votes: 0
      priority: 0

The full deploy/cr.yaml file is here:

# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Platform type: kubernetes, openshift
# platform: kubernetes

# Cluster DNS Suffix
# clusterServiceDNSSuffix: svc.cluster.local
clusterServiceDNSMode: "Internal"

finalizers:
## Set this if you want that operator deletes the primary pod last
  - delete-psmdb-pods-in-order
## Set this if you want to delete database persistent volumes on cluster deletion
#  - delete-psmdb-pvc

nameOverride: ""
fullnameOverride: ""

crVersion: 1.17.0
pause: false
unmanaged: false
unsafeFlags:
  tls: false
  replsetSize: false
  mongosSize: false
  terminationGracePeriod: false
  backupIfUnhealthy: false

annotations: {}

# ignoreAnnotations:
#   - service.beta.kubernetes.io/aws-load-balancer-backend-protocol
# ignoreLabels:
#   - rack
multiCluster:
  enabled: false
  # DNSSuffix: svc.clusterset.local
updateStrategy: SmartUpdate
upgradeOptions:
  versionServiceEndpoint: https://check.percona.com
  apply: disabled
  schedule: "0 2 * * *"
  setFCV: false

image:
  repository: percona/percona-server-mongodb
  tag: 7.0.8-5

imagePullPolicy: Always
# imagePullSecrets: []
# initImage:
#   repository: percona/percona-server-mongodb-operator
#   tag: 1.14.0
# initContainerSecurityContext: {}
# tls:
#   mode: preferTLS
#   # 90 days in hours
#   certValidityDuration: 2160h
#   allowInvalidCertificates: true
#   issuerConf:
#     name: special-selfsigned-issuer
#     kind: ClusterIssuer
#     group: cert-manager.io
secrets:
  # If you set users secret here the operator will use existing one or generate random values
  # If not set the operator generates the default secret with name <cluster_name>-secrets
  users: perconamongodbcluster-secrets
  encryptionKey: perconamongodbcluster-mongodb-encryption-key
  ssl: perconamongodbcluster-ssl
  sslInternal: perconamongodbcluster-ssl-internal
  # key: perconamongodbcluster-mongodb-keyfile
  # vault: my-cluster-name-vault
  # ldapSecret: my-ldap-secret
  # sse: my-cluster-name-sse

pmm:
  enabled: false
  image:
    repository: percona/pmm-client
    tag: 2.41.2
  serverHost: ip-10-20-221-233.eu-central-1.compute.internal

replsets:
  rs0:
    name: rs0
    size: 3
    # terminationGracePeriodSeconds: 300
    externalNodes:
    - host: <REDACTED>.elb.eu-central-1.amazonaws.com
      port: 27017
      votes: 0
      priority: 0
    - host: <REDACTED>.elb.eu-central-1.amazonaws.com
      port: 27017
      votes: 0
      priority: 0
    - host: <REDACTED>.elb.eu-central-1.amazonaws.com
      port: 27017
      votes: 0
      priority: 0
    # configuration: |
    #   operationProfiling:
    #     mode: slowOp
    #   systemLog:
    #     verbosity: 1
    # serviceAccountName: percona-server-mongodb-operator
    # topologySpreadConstraints:
    #   - labelSelector:
    #       matchLabels:
    #         app.kubernetes.io/name: percona-server-mongodb
    #     maxSkew: 1
    #     topologyKey: kubernetes.io/hostname
    #     whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "none"
      # advanced:
      #   podAffinity:
      #     requiredDuringSchedulingIgnoredDuringExecution:
      #     - labelSelector:
      #         matchExpressions:
      #         - key: security
      #           operator: In
      #           values:
      #           - S1
      #       topologyKey: failure-domain.beta.kubernetes.io/zone
    # priorityClass: ""
    # annotations: {}
    # labels: {}
    # podSecurityContext: {}
    # containerSecurityContext: {}
    nodeSelector:
      agentpool: stable
    # livenessProbe:
    #   failureThreshold: 4
    #   initialDelaySeconds: 60
    #   periodSeconds: 30
    #   timeoutSeconds: 10
    #   startupDelaySeconds: 7200
    # readinessProbe:
    #   failureThreshold: 8
    #   initialDelaySeconds: 10
    #   periodSeconds: 3
    #   successThreshold: 1
    #   timeoutSeconds: 2
    # runtimeClassName: image-rc
    # storage:
    #   engine: wiredTiger
    #   wiredTiger:
    #     engineConfig:
    #       cacheSizeRatio: 0.5
    #       directoryForIndexes: false
    #       journalCompressor: snappy
    #     collectionConfig:
    #       blockCompressor: snappy
    #     indexConfig:
    #       prefixCompression: true
    #   inMemory:
    #     engineConfig:
    #        inMemorySizeRatio: 0.5
    # sidecars:
    # - image: busybox
    #   command: ["/bin/sh"]
    #   args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
    #   name: rs-sidecar-1
    #   volumeMounts:
    #     - mountPath: /volume1
    #       name: sidecar-volume-claim
    #     - mountPath: /secret
    #       name: sidecar-secret
    #     - mountPath: /configmap
    #       name: sidecar-config
    # sidecarVolumes:
    # - name: sidecar-secret
    #   secret:
    #     secretName: mysecret
    # - name: sidecar-config
    #   configMap:
    #     name: myconfigmap
    # sidecarPVCs:
    # - apiVersion: v1
    #   kind: PersistentVolumeClaim
    #   metadata:
    #     name: sidecar-volume-claim
    #   spec:
    #     resources:
    #       requests:
    #         storage: 1Gi
    #     volumeMode: Filesystem
    #     accessModes:
    #       - ReadWriteOnce
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: LoadBalancer
      # loadBalancerSourceRanges:
      #   - 10.0.0.0/8
      # serviceAnnotations:
        # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        # service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
        # service.beta.kubernetes.io/aws-load-balancer-target-node-label: "role=database"
      # serviceLabels:
      #   some-label: some-key
    # schedulerName: ""
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    volumeSpec:
      # emptyDir: {}
      # hostPath:
      #   path: /data
      #   type: Directory
      pvc:
        # annotations:
        #   volume.beta.kubernetes.io/storage-class: example-hostpath
        # labels:
        #   rack: rack-22
        storageClassName: "container-mongodb"
        # accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 3Gi
    # hostAliases:
    # - ip: "10.10.0.2"
    #   hostnames:
    #   - "host1"
    #   - "host2"
    nonvoting:
      enabled: false
      # podSecurityContext: {}
      # containerSecurityContext: {}
      size: 3
      # configuration: |
      #   operationProfiling:
      #     mode: slowOp
      #   systemLog:
      #     verbosity: 1
      # serviceAccountName: percona-server-mongodb-operator
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
        # advanced:
        #   podAffinity:
        #     requiredDuringSchedulingIgnoredDuringExecution:
        #     - labelSelector:
        #         matchExpressions:
        #         - key: security
        #           operator: In
        #           values:
        #           - S1
        #       topologyKey: failure-domain.beta.kubernetes.io/zone
      tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "database"
        effect: "NoSchedule"
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      nodeSelector:
        role: database
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.4G"
        requests:
          cpu: "300m"
          memory: "0.1G"
      volumeSpec:
        # emptyDir: {}
        # hostPath:
        #   path: /data
        #   type: Directory
        pvc:
          # annotations:
          #   volume.beta.kubernetes.io/storage-class: example-hostpath
          # labels:
          #   rack: rack-22
          # storageClassName: standard
          # accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 3Gi
    arbiter:
      enabled: false
      size: 1
      # serviceAccountName: percona-server-mongodb-operator
      affinity:
        antiAffinityTopologyKey: "kubernetes.io/hostname"
        # advanced:
        #   podAffinity:
        #     requiredDuringSchedulingIgnoredDuringExecution:
        #     - labelSelector:
        #         matchExpressions:
        #         - key: security
        #           operator: In
        #           values:
        #           - S1
        #       topologyKey: failure-domain.beta.kubernetes.io/zone
      # tolerations: []
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      nodeSelector:
        role: database

sharding:
  enabled: false
  balancer:
    enabled: true

  configrs:
    size: 3
    # terminationGracePeriodSeconds: 300
    # externalNodes:
    # - host: 34.124.76.90
    # - host: 34.124.76.91
    #   port: 27017
    #   votes: 0
    #   priority: 0
    # - host: 34.124.76.92
    # configuration: |
    #   operationProfiling:
    #     mode: slowOp
    #   systemLog:
    #     verbosity: 1
    # serviceAccountName: percona-server-mongodb-operator
    # topologySpreadConstraints:
    #   - labelSelector:
    #       matchLabels:
    #         app.kubernetes.io/name: percona-server-mongodb
    #     maxSkew: 1
    #     topologyKey: kubernetes.io/hostname
    #     whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
      # advanced:
      #   podAffinity:
      #     requiredDuringSchedulingIgnoredDuringExecution:
      #     - labelSelector:
      #         matchExpressions:
      #         - key: security
      #           operator: In
      #           values:
      #           - S1
      #       topologyKey: failure-domain.beta.kubernetes.io/zone
    # tolerations: []
    # priorityClass: ""
    # annotations: {}
    # labels: {}
    # podSecurityContext: {}
    # containerSecurityContext: {}
    nodeSelector:
      agentpool: stable
    # livenessProbe: {}
    # readinessProbe: {}
    # runtimeClassName: image-rc
    # sidecars:
    # - image: busybox
    #   command: ["/bin/sh"]
    #   args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
    #   name: rs-sidecar-1
    #   volumeMounts:
    #     - mountPath: /volume1
    #       name: sidecar-volume-claim
    # sidecarPVCs: []
    # sidecarVolumes: []
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: false
      exposeType: ClusterIP
      # loadBalancerSourceRanges:
      #   - 10.0.0.0/8
      # serviceAnnotations:
      #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      # serviceLabels:
      #   some-label: some-key
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    volumeSpec:
      # emptyDir: {}
      # hostPath:
      #   path: /data
      #   type: Directory
      pvc:
        # annotations:
        #   volume.beta.kubernetes.io/storage-class: example-hostpath
        # labels:
        #   rack: rack-22
        # storageClassName: standard
        # accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 3Gi
    # hostAliases:
    # - ip: "10.10.0.2"
    #   hostnames:
    #   - "host1"
    #   - "host2"

  mongos:
    size: 2
    # terminationGracePeriodSeconds: 300
    # configuration: |
    #   systemLog:
    #     verbosity: 1
    # serviceAccountName: percona-server-mongodb-operator
    # topologySpreadConstraints:
    #   - labelSelector:
    #       matchLabels:
    #         app.kubernetes.io/name: percona-server-mongodb
    #     maxSkew: 1
    #     topologyKey: kubernetes.io/hostname
    #     whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "none"
      # advanced:
      #   podAffinity:
      #     requiredDuringSchedulingIgnoredDuringExecution:
      #     - labelSelector:
      #         matchExpressions:
      #         - key: security
      #           operator: In
      #           values:
      #           - S1
      #       topologyKey: failure-domain.beta.kubernetes.io/zone
    # tolerations: []
    # priorityClass: ""
    # annotations: {}
    # labels: {}
    # podSecurityContext: {}
    # containerSecurityContext: {}
    nodeSelector:
      agentpool: stable
    # livenessProbe: {}
    # readinessProbe: {}
    # runtimeClassName: image-rc
    # sidecars:
    # - image: busybox
    #   command: ["/bin/sh"]
    #   args: ["-c", "while true; do echo echo $(date -u) 'test' >> /dev/null; sleep 5;done"]
    #   name: rs-sidecar-1
    #   volumeMounts:
    #     - mountPath: /volume1
    #       name: sidecar-volume-claim
    # sidecarPVCs: []
    # sidecarVolumes: []
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: "300m"
        memory: "0.5G"
      requests:
        cpu: "300m"
        memory: "0.5G"
    expose:
      exposeType: ClusterIP
      # servicePerPod: true
      # loadBalancerSourceRanges:
      #   - 10.0.0.0/8
      # serviceAnnotations:
      #   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      # serviceLabels:
      #   some-label: some-key
      # nodePort: 32017
    # auditLog:
    #   destination: file
    #   format: BSON
    #   filter: '{}'
    # hostAliases:
    # - ip: "10.10.0.2"
    #   hostnames:
    #   - "host1"
    #   - "host2"

backup:
  enabled: false
  image:
    repository: percona/percona-backup-mongodb
    tag: 2.4.1
  # annotations:
  #   iam.amazonaws.com/role: role-arn
  # podSecurityContext: {}
  # containerSecurityContext: {}
  # resources:
  #   limits:
  #     cpu: "300m"
  #     memory: "0.5G"
  #   requests:
  #     cpu: "300m"
  #     memory: "0.5G"
  storages: {}
    # s3-us-west:
    #   type: s3
    #   s3:
    #     bucket: S3-BACKUP-BUCKET-NAME-HERE
    #     credentialsSecret: my-cluster-name-backup-s3
    #     serverSideEncryption:
    #       kmsKeyID: 1234abcd-12ab-34cd-56ef-1234567890ab
    #       sseAlgorithm: aws:kms
    #       sseCustomerAlgorithm: AES256
    #       sseCustomerKey: Y3VzdG9tZXIta2V5
    #     retryer:
    #       numMaxRetries: 3
    #       minRetryDelay: 30ms
    #       maxRetryDelay: 5m
    #     region: us-west-2
    #     prefix: ""
    #     uploadPartSize: 10485760
    #     maxUploadParts: 10000
    #     storageClass: STANDARD
    #     insecureSkipTLSVerify: false
    # minio:
    #   type: s3
    #   s3:
    #     bucket: MINIO-BACKUP-BUCKET-NAME-HERE
    #     region: us-east-1
    #     credentialsSecret: my-cluster-name-backup-minio
    #     endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
    #     prefix: ""
    #   azure-blob:
    #     type: azure
    #     azure:
    #       container: CONTAINER-NAME
    #       prefix: PREFIX-NAME
    #       endpointUrl: https://accountName.blob.core.windows.net
    #       credentialsSecret: SECRET-NAME
  pitr:
    enabled: false
    oplogOnly: false
    # oplogSpanMin: 10
    # compressionType: gzip
    # compressionLevel: 6
  # configuration:
  #   backupOptions:
  #     priority:
  #       "localhost:28019": 2.5
  #       "localhost:27018": 2.5
  #     timeouts:
  #       startingStatus: 33
  #     oplogSpanMin: 10
  #   restoreOptions:
  #     batchSize: 500
  #     numInsertionWorkers: 10
  #     numDownloadWorkers: 4
  #     maxDownloadBufferMb: 0
  #     downloadChunkMb: 32
  #     mongodLocation: /usr/bin/mongo
  #     mongodLocationMap:
  #       "node01:2017": /usr/bin/mongo
  #       "node03:27017": /usr/bin/mongo
  tasks: []
  # - name: daily-s3-us-west
  #   enabled: true
  #   schedule: "0 0 * * *"
  #   keep: 3
  #   storageName: s3-us-west
  #   compressionType: gzip
  # - name: weekly-s3-us-west
  #   enabled: false
  #   schedule: "0 0 * * 0"
  #   keep: 5
  #   storageName: s3-us-west
  #   compressionType: gzip
  # - name: weekly-s3-us-west-physical
  #   enabled: false
  #   schedule: "0 5 * * 0"
  #   keep: 5
  #   type: physical
  #   storageName: s3-us-west
  #   compressionType: gzip
  #   compressionLevel: 6

Don’t mind the comments :slight_smile: I am also using Helm charts to deploy PSMDB.