Pods restart after upgrade to Operator 1.21.0

Description:

After upgrading to Percona Operator for MongoDB 1.21.0 (Helm chart psmdb-db 1.21.0), MongoDB pods in a non-sharded replica set (percona/percona-server-mongodb:7.0.24-13) restart intermittently. The mongod container logs repeated "pthread_create failed" errors, and the operator frequently reports "FULL CLUSTER CRASH" followed by "leader election lost" errors. Rolling back to 1.20.0 immediately resolves the problem.

Steps to Reproduce:

  1. Deploy psmdb-operator Helm chart 1.21.0.
  2. Deploy psmdb-db Helm chart 1.21.0 with image percona/percona-server-mongodb:7.0.24-13 (example Helm commands are shown after this list).
  3. Observe that MongoDB pods start normally.
  4. After several minutes or hours, the pods begin restarting due to liveness probe failures or internal errors.
  5. Operator logs show repeated "FULL CLUSTER CRASH" messages, and the operator occasionally terminates with "leader election lost".
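
For reference, a minimal Helm-based reproduction looks roughly like this (release names and namespace are assumptions; the image tag matches the report and the TLS/sharding settings match the Additional Information section below):

helm repo add percona https://percona.github.io/percona-helm-charts/
helm install psmdb-operator percona/psmdb-operator --version 1.21.0 -n psmdb --create-namespace
helm install psmdb-db percona/psmdb-db --version 1.21.0 -n psmdb \
  --set image.tag=7.0.24-13 \
  --set tls.mode=preferTLS \
  --set sharding.enabled=false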

Version:

  1. Operator: 1.21.0
  2. Charts: 1.21.0
  3. Database image: percona/percona-server-mongodb:7.0.24-13
  4. Cluster type: non-sharded replica set
  5. Previous working version: 1.20.0

Logs:

Mongod container:

exec numactl --interleave=all mongod --bind_ip_all --auth --dbpath=/data/db --port=27017 --replSet=rs0 ...
{"t":{"$date":"2025-10-25T13:26:40.744Z"},"s":"W","c":"CONTROL","id":23321,"ctx":"main","msg":"Option: This name is deprecated. Please use the preferred name instead.","attr":{"deprecatedName":"sslPEMKeyFile","preferredName":"tlsCertificateKeyFile"}}
[1761402410:995451][1:0x7f9144a65640], sweep-server: [WT_VERB_DEFAULT][WARNING]: Session 15 did not run a sweep for 60 minutes.
[1761402410:995491][1:0x7f9144a65640], sweep-server: [WT_VERB_DEFAULT][WARNING]: Session 16 did not run a sweep for 60 minutes.
ERROR(4850900): pthread_create failed
ERROR(4850900): pthread_create failed

Operator:

ERROR   FULL CLUSTER CRASH  error: ping mongo: server selection error: server selection timeout, current topology:
{ Type: ReplicaSetNoPrimary, Servers: [
  { Addr: psmdb-db-rs0-0..., Type: RSSecondary },
  { Addr: psmdb-db-rs0-1..., Type: Unknown, Last error: dial tcp: lookup psmdb-db-rs0-1...: no such host },
  { Addr: psmdb-db-rs0-2..., Type: RSSecondary }
] }
E1027 14:07:15 leaderelection.go:441] Failed to update lock optimistically: context deadline exceeded
E1027 14:07:15 leaderelection.go:448] error retrieving resource lock ...: context deadline exceeded
I1027 14:07:15 leaderelection.go:297] failed to renew lease ...: context deadline exceeded
ERROR setup  problem running manager {"error": "leader election lost"}

Expected Result:

MongoDB pods remain healthy and stable.
Operator maintains leadership and performs normal reconciliations without restarts or “FULL CLUSTER CRASH” events.

Actual Result:

  1. MongoDB pods periodically restart as a result of liveness probe failures or internal errors.
  2. Operator logs “FULL CLUSTER CRASH” and loses leadership.
  3. Replica set members temporarily lose connectivity ("no such host" DNS errors).

Additional Information:

  1. Using TLS mode: preferTLS (see the values excerpt after this list).
  2. The cluster is running on EKS; reverting both the operator and database to Helm chart version 1.20.0 immediately resolves all issues.
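
For reference, the TLS mode is set in the psmdb-db chart values roughly like this (a minimal excerpt; everything else is left at defaults):

tls:
  mode: preferTLS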

Hi @shepz

Thanks for reporting this issue — I’ll try to reproduce it.
From the steps you provided, it’s not clear to me whether the issue occurs only when upgrading from a previous chart version, only on a fresh deployment, or both.
If it happens during an upgrade, could you share the exact steps you followed for the upgrade process?
Also, could you let me know the Kubernetes version of your EKS cluster?

Thank you.

Thanks for your message! :blush:
At first I thought the issue only happened during upgrades, but I redeployed a new database from scratch and still saw the same behavior — so it happens in both cases.

During the upgrade, I updated the operator chart first, then the database chart.
I’m running on EKS 1.30, but I also tried on 1.33 and the issue still occurs.

Thanks again for looking into it!

Hi @shepz, did you update the CRDs? You need to apply them manually.
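
For 1.21.0 that is usually something like the following (the manifest path assumes the upstream operator repository; adjust if you install from somewhere else):

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.21.0/deploy/crd.yaml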

I’m upgrading through ArgoCD with syncOptions: ServerSideApply=true.

I can confirm that the CRDs already include the new labels introduced in this release, and the new logcollector container is being deployed in the pods.
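
For reference, the ArgoCD Application is configured roughly like this (the name, namespace, and target revision are placeholders for my actual setup):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: psmdb-operator        # placeholder Application name
spec:
  project: default
  source:
    repoURL: https://percona.github.io/percona-helm-charts/
    chart: psmdb-operator
    targetRevision: 1.21.0
  destination:
    server: https://kubernetes.default.svc
    namespace: psmdb          # placeholder namespace
  syncPolicy:
    syncOptions:
      - ServerSideApply=true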

I’ve been searching for a solution to this problem for days and came across this topic. I encountered the same problem in my 3-shard setup. Simply downgrading the operator to 1.20 solved the issues.

I use Percona (it was a fresh install, not an upgrade) on RKE2 + Rancher.

Thanks for replying, @shepz. I tried to reproduce the issue but had no luck on EKS 1.33. Based on your description, I’m using psmdb-operator version 1.21.0 and deploying the psmdb-db chart with:

--version 1.21.0
--set image.tag=7.0.24-13
--set tls.mode=preferTLS
--set sharding.enabled=false

The deployment has been running for over an hour without any issues.

@shepz @Islam_Saka Could you share more details about your CRs or Helm values to help pinpoint the problem? Also, knowing the exact image versions you were using at the time of the error would be helpful.

Thanks!

@Julio_Pasinatto

Percona Server CR

annotations: {}
backup:
  enabled: true
  image:
    repository: percona/percona-backup-mongodb
    tag: 2.11.0
  pitr:
    enabled: true
  storages:
    s3-idrive:
      s3:
        bucket: aps-gp-incremental
        credentialsSecret: aps-gp-incremental-s3
        endpointUrl: https://r6u4.fra2.idrivee2-9.com
        prefix: pbm
        region: frankfurt-2
      type: s3
  tasks:
    - compressionLevel: 6
      compressionType: gzip
      enabled: true
      name: weekly-s3-idrive-incremental
      schedule: 0 1 * * *
      storageName: s3-idrive
      type: incremental
    - compressionLevel: 6
      compressionType: gzip
      enabled: true
      name: weekly-s3-idrive-incremental-base
      retention:
        count: 3
        deleteFromStorage: true
        type: count
      schedule: 0 5 * * 0
      storageName: s3-idrive
      type: incremental-base
crVersion: 1.21.0
enableVolumeExpansion: true
finalizers:
  - percona.com/delete-psmdb-pods-in-order
fullnameOverride: ''
image:
  repository: percona/percona-server-mongodb
  tag: 8.0.12-4
imagePullPolicy: Always
logcollector:
  enabled: true
  image:
    repository: percona/fluentbit
    tag: 4.0.1
  resources:
    requests:
      cpu: 200m
      memory: 100M
multiCluster:
  enabled: false
nameOverride: ''
pause: false
pmm:
  enabled: false
  image:
    repository: percona/pmm-client
    tag: 3.4.1
  serverHost: monitoring-service
replsets:
  rs0:
    affinity:
      antiAffinityTopologyKey: none
    arbiter:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 1
    expose:
      enabled: false
      type: ClusterIP
    hidden:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 2
      volumeSpec:
        pvc:
          resources:
            requests:
              storage: 3Gi
    name: rs0
    nonvoting:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 3
      volumeSpec:
        pvc:
          resources:
            requests:
              storage: 3Gi
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: 4000m
        memory: 32Gi
      requests:
        cpu: 2000m
        memory: 16Gi
    size: 1
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 500Gi
        storageClassName: longhorn
    configuration: |
      storage:
        wiredTiger:
          engineConfig:
            cacheSizeGB: 16
  rs1:
    affinity:
      antiAffinityTopologyKey: none
    arbiter:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 1
    configuration: |
      storage:
        wiredTiger:
          engineConfig:
            cacheSizeGB: 16
    expose:
      enabled: false
      type: ClusterIP
    hidden:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 2
      volumeSpec:
        pvc:
          resources:
            requests:
              storage: 3Gi
    name: rs1
    nonvoting:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 3
      volumeSpec:
        pvc:
          resources:
            requests:
              storage: 3Gi
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: 4000m
        memory: 32Gi
      requests:
        cpu: 2000m
        memory: 16Gi
    size: 1
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 500Gi
        storageClassName: longhorn
  rs2:
    affinity:
      antiAffinityTopologyKey: none
    arbiter:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 1
    configuration: |
      storage:
        wiredTiger:
          engineConfig:
            cacheSizeGB: 16
    expose:
      enabled: false
      type: ClusterIP
    hidden:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 2
      volumeSpec:
        pvc:
          resources:
            requests:
              storage: 3Gi
    name: rs2
    nonvoting:
      affinity:
        antiAffinityTopologyKey: none
      enabled: false
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 600m
          memory: 1Gi
        requests:
          cpu: 300m
          memory: 1Gi
      size: 3
      volumeSpec:
        pvc:
          resources:
            requests:
              storage: 3Gi
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: 4000m
        memory: 32Gi
      requests:
        cpu: 2000m
        memory: 16Gi
    size: 1
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 500Gi
        storageClassName: longhorn
secrets: {}
sharding:
  balancer:
    enabled: true
  configrs:
    affinity:
      antiAffinityTopologyKey: none
    expose:
      enabled: false
      type: ClusterIP
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: 1000m
        memory: 4Gi
      requests:
        cpu: 500m
        memory: 2Gi
    size: 3
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 80Gi
        storageClassName: longhorn
  enabled: true
  mongos:
    affinity:
      antiAffinityTopologyKey: none
    expose:
      enabled: true
      type: NodePort
      nodePort: 32017
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: 1000m
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 1Gi
    size: 3
unmanaged: false
unsafeFlags:
  backupIfUnhealthy: false
  mongosSize: false
  replsetSize: true
  terminationGracePeriod: false
  tls: false
updateStrategy: SmartUpdate
upgradeOptions:
  apply: disabled
  schedule: 0 2 * * *
  setFCV: false
  versionServiceEndpoint: https://check.percona.com

Percona Operator CR (downgraded and works well)

affinity: {}
annotations: {}
disableTelemetry: false
env:
  resyncPeriod: 5s
fullnameOverride: ''
image:
  pullPolicy: IfNotPresent
  repository: percona/percona-server-mongodb-operator
  tag: 1.20.1
imagePullSecrets: []
labels: {}
logLevel: INFO
logStructured: false
nameOverride: ''
nodeSelector: {}
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
rbac:
  create: true
replicaCount: 1
resources: {}
securityContext: {}
serviceAccount:
  annotations: {}
  create: true
tolerations: []
watchAllNamespaces: false

Can you try reproducing by upgrading the operator to 1.21.0 or 1.21.1? (The RKE2 charts provide two different versions, and both produce the same result.)

Here is the psmdb CR:

apiVersion: v1
items:
- apiVersion: psmdb.percona.com/v1
  kind: PerconaServerMongoDB
  metadata:
    annotations:
      meta.helm.sh/release-name: psmdb-db
      meta.helm.sh/release-namespace: guillaume1
    creationTimestamp: "2025-10-25T02:44:44Z"
    finalizers:
    - percona.com/delete-psmdb-pods-in-order
    generation: 2
    labels:
      app.kubernetes.io/instance: psmdb-db
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: psmdb-db
      app.kubernetes.io/version: 1.21.0
      helm.sh/chart: psmdb-db-1.21.0
    name: psmdb-db
    namespace: guillaume1
    resourceVersion: "2649643"
    uid: d280decb-a3bc-42f1-8f08-2c77930b67d0
  spec:
    backup:
      enabled: true
      image: myprivateregistry.azurecr.io/percona/percona-backup-mongodb:2.11.0
      pitr:
        enabled: false
      storages:
        s3-primary:
          s3:
            bucket: test-backup-guillaume-psmdb
            credentialsSecret: psmdb-db-backup-s3
            region: ca-central-1
          type: s3
      tasks:
      - enabled: true
        name: daily-s3-primary
        retention:
          count: 10
          deleteFromStorage: true
          type: count
        schedule: 0 0 * * *
        storageName: s3-primary
    crVersion: 1.21.0
    enableVolumeExpansion: false
    image: myprivateregistry.azurecr.io/percona/percona-server-mongodb:7.0.24-13
    imagePullPolicy: Always
    imagePullSecrets:
    - name: my-docker-credentials
    logcollector:
      enabled: true
      image: myprivateregistry.azurecr.io/percona/fluentbit:4.0.1
      resources:
        requests:
          cpu: 200m
          memory: 100M
    multiCluster:
      enabled: false
    pause: false
    pmm:
      enabled: false
      image: myprivateregistry.azurecr.io/percona/pmm-client:3.4.1
      serverHost: pmm-monitoring-service
    replsets:
    - affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      arbiter:
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        enabled: false
        resources:
          limits:
            cpu: 600m
            memory: 1Gi
          requests:
            cpu: 300m
            memory: 1Gi
        size: 1
      expose:
        enabled: false
        type: ClusterIP
      hidden:
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        enabled: false
        podDisruptionBudget:
          maxUnavailable: 1
        resources:
          limits:
            cpu: 600m
            memory: 1Gi
          requests:
            cpu: 300m
            memory: 1Gi
        size: 2
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 3Gi
      name: rs0
      nonvoting:
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        enabled: false
        podDisruptionBudget:
          maxUnavailable: 1
        resources:
          limits:
            cpu: 600m
            memory: 1Gi
          requests:
            cpu: 300m
            memory: 1Gi
        size: 3
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 3Gi
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "1"
          memory: 2Gi
        requests:
          cpu: "1"
          memory: 2Gi
      serviceAccountName: psmdb-operator
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 30Gi
    secrets:
      users: psmdb-secrets
    sharding:
      balancer:
        enabled: true
      configsvrReplSet:
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        expose:
          enabled: false
          type: ClusterIP
        podDisruptionBudget:
          maxUnavailable: 1
        resources:
          limits:
            cpu: 600m
            memory: 1Gi
          requests:
            cpu: 300m
            memory: 1Gi
        size: 3
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 3Gi
      enabled: false
      mongos:
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        expose:
          type: ClusterIP
        podDisruptionBudget:
          maxUnavailable: 1
        resources:
          limits:
            cpu: 600m
            memory: 1Gi
          requests:
            cpu: 300m
            memory: 1Gi
        size: 3
    unmanaged: false
    unsafeFlags:
      backupIfUnhealthy: false
      mongosSize: false
      replsetSize: false
      terminationGracePeriod: false
      tls: false
    updateStrategy: SmartUpdate
    upgradeOptions:
      apply: disabled
      schedule: 0 2 * * *
      setFCV: false
      versionServiceEndpoint: https://check.percona.com
    users:
    - db: admin
      name: app-user
      passwordSecretRef:
        key: password
        name: psmdb-app-user-password
      roles:
      - db: admin
        name: userAdminAnyDatabase
      - db: admin
        name: readWriteAnyDatabase
  status:
    backupConfigHash: 36336c4bc9f8c076712aa4e0d2fad69e458bfa6301093cae4229a8bbf5e36d11
    backupImage: myprivateregistry.azurecr.io/percona/percona-backup-mongodb:2.11.0
    backupVersion: 2.11.0
    conditions:
    - lastTransitionTime: "2025-10-25T02:44:44Z"
      status: "False"
      type: sharding
    - lastTransitionTime: "2025-10-25T02:44:45Z"
      status: "True"
      type: initializing
    - lastTransitionTime: "2025-10-25T02:47:00Z"
      message: 'update PiTR config: create pbm object: create PBM connection to psmdb-db-rs0-0.psmdb-db-rs0.guillaume1.svc.cluster.local:27017,psmdb-db-rs0-1.psmdb-db-rs0.guillaume1.svc.cluster.local:27017,psmdb-db-rs0-2.psmdb-db-rs0.guillaume1.svc.cluster.local:27017:
        create mongo connection: ping: connection() error occurred during connection
        handshake: EOF'
      reason: ErrorReconcile
      status: "True"
      type: error
    - lastTransitionTime: "2025-10-25T02:47:35Z"
      status: "True"
      type: ready
    host: psmdb-db-rs0.guillaume1.svc.cluster.local
    mongoImage: myprivateregistry.azurecr.io/percona/percona-server-mongodb:7.0.24-13
    mongoVersion: 7.0.24-13
    observedGeneration: 2
    ready: 3
    replsets:
      rs0:
        initialized: true
        members:
          psmdb-db-rs0-0:
            name: psmdb-db-rs0-0.psmdb-db-rs0.guillaume1.svc.cluster.local:27017
            state: 1
            stateStr: PRIMARY
          psmdb-db-rs0-1:
            name: psmdb-db-rs0-1.psmdb-db-rs0.guillaume1.svc.cluster.local:27017
            state: 2
            stateStr: SECONDARY
          psmdb-db-rs0-2:
            name: psmdb-db-rs0-2.psmdb-db-rs0.guillaume1.svc.cluster.local:27017
            state: 2
            stateStr: SECONDARY
        ready: 3
        size: 3
        status: ready
    size: 3
    state: ready
kind: List
metadata:
  resourceVersion: ""

Here is the operator Deployment:

apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
      meta.helm.sh/release-name: psmdb-operator
      meta.helm.sh/release-namespace: guillaume1
    creationTimestamp: "2025-10-25T02:41:03Z"
    generation: 1
    labels:
      app.kubernetes.io/instance: psmdb-operator
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: psmdb-operator
      app.kubernetes.io/version: 1.21.0
      helm.sh/chart: psmdb-operator-1.21.0
    name: psmdb-operator
    namespace: guillaume1
    resourceVersion: "2564397"
    uid: 1291e872-dd07-4237-9720-b46d70e5ae19
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app.kubernetes.io/instance: psmdb-operator
        app.kubernetes.io/name: psmdb-operator
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app.kubernetes.io/instance: psmdb-operator
          app.kubernetes.io/name: psmdb-operator
      spec:
        containers:
        - command:
          - percona-server-mongodb-operator
          env:
          - name: LOG_STRUCTURED
            value: "false"
          - name: LOG_LEVEL
            value: INFO
          - name: WATCH_NAMESPACE
            value: guillaume1
          - name: POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: OPERATOR_NAME
            value: percona-server-mongodb-operator
          - name: RESYNC_PERIOD
            value: 5s
          - name: DISABLE_TELEMETRY
            value: "false"
          image: myprivateregistry.azurecr.io/percona/percona-server-mongodb-operator:1.21.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: health
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: psmdb-operator
          ports:
          - containerPort: 8080
            name: metrics
            protocol: TCP
          - containerPort: 8081
            name: health
            protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: health
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
          securityContext: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        imagePullSecrets:
        - name: my-docker-credentials
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: psmdb-operator
        serviceAccountName: psmdb-operator
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2025-10-25T02:41:03Z"
      lastUpdateTime: "2025-10-25T02:41:04Z"
      message: ReplicaSet "psmdb-operator-b6459b947" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    - lastTransitionTime: "2025-10-28T15:26:06Z"
      lastUpdateTime: "2025-10-28T15:26:06Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""

Hi, thanks @shepz @Islam_Saka

We discovered a connection leak issue in version 1.21.0, and we will release a hotfix soon, as discussed in this topic: Percona Operator for MongoDB endlessly spawning connections until OOMKilled
It’s likely that the number of connections keeps increasing until it reaches your configured memory limits, causing the pods to restart.
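
If you want to confirm this on your side before the fix, a rough way to watch it is the following (namespace and pod names taken from the dumps above; auth options for mongosh are omitted):

# Watch restarts and memory of the operator pod
kubectl -n guillaume1 get pod -l app.kubernetes.io/name=psmdb-operator -w
kubectl -n guillaume1 top pod    # requires metrics-server
# Watch connection growth on a replica set member
kubectl -n guillaume1 exec psmdb-db-rs0-0 -c mongod -- mongosh --quiet --eval 'db.serverStatus().connections'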

Hi @shepz @Islam_Saka
Just to let you know that we released the hotfix: Percona Operator for MongoDB 1.21.1 (2025-10-30).
When you get a chance, please try it out and let me know if it works for you.
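
For a Helm-based install the upgrade is roughly as follows (release name and namespace are assumptions; apply the updated CRDs first, and use --set image.tag=1.21.1 instead if the chart version has not been bumped yet):

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.21.1/deploy/crd.yaml
helm repo update
helm upgrade psmdb-operator percona/psmdb-operator --version 1.21.1 -n psmdb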

Hey @Julio_Pasinatto, thanks for the follow-up! I deployed the hotfix in my dev environment — so far, the pods haven’t restarted. I’ll keep an eye on it over the next couple of days and let you know how it goes.

1 Like