Error during an attempt to restore a physical backup of MongoDB

Description:

I deployed a MongoDB cluster with scheduled physical backups using the Percona MongoDB Operator (version 1.19.1). When I attempt to restore a physical backup on the same cluster, the restore fails with an error related to a Kubernetes StatefulSet, and the cluster then seems to revert to its previous state.

Steps to Reproduce:

Deploy a cluster using the MongoDB operator (version 1.19.1) with the following manifest:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  finalizers:
    - percona.com/delete-psmdb-pods-in-order
  labels:
    app.kubernetes.io/instance: staging-mongodb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: psmdb-db
    app.kubernetes.io/version: 1.19.1
    argocd.argoproj.io/instance: staging-mongodb
    helm.sh/chart: psmdb-db-1.19.1
  name: staging-mongodb-psmdb
  namespace: mongodb
spec:
  backup:
    enabled: true
    image: percona/percona-backup-mongodb:2.8.0-multi
    pitr:
      enabled: false
    storages:
      s3-backups:
        s3: [REDACTED]
        type: s3
    tasks:
      - compressionLevel: 6
        compressionType: gzip
        enabled: true
        keep: 7
        name: physical-backups
        schedule: 0 4 * * *
        storageName: s3-backups
        type: physical
      - compressionLevel: 6
        compressionType: gzip
        enabled: true
        keep: 7
        name: logical-backups
        schedule: 0 5 * * *
        storageName: s3-backups
    volumeMounts: []
  crVersion: 1.19.1
  enableVolumeExpansion: false
  image: percona/percona-server-mongodb:7.0.15-9-multi
  imagePullPolicy: Always
  multiCluster:
    enabled: false
  pause: false
  pmm:
    enabled: false
    image: percona/pmm-client:2.44.0
    serverHost: monitoring-service
  replsets:
    - affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      annotations: {}
      arbiter:
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        enabled: false
        size: 1
      expose:
        enabled: false
        type: ClusterIP
      name: rs0
      nonvoting:
        affinity:
          antiAffinityTopologyKey: kubernetes.io/hostname
        enabled: false
        podDisruptionBudget:
          maxUnavailable: 1
        resources:
          limits:
            cpu: 300m
            memory: 0.5G
          requests:
            cpu: 300m
            memory: 0.5G
        size: 3
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 3Gi
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 500M
        requests:
          cpu: 300m
          memory: 500M
      size: 3
      topologySpreadConstraints: []
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 5Gi
  secrets:
    users: staging-mongodb-psmdb-secrets
  sharding:
    balancer:
      enabled: true
    configsvrReplSet:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      expose:
        enabled: false
        type: ClusterIP
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 500m
          memory: 4G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    enabled: true
    mongos:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      expose:
        type: ClusterIP
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 3
  unmanaged: false
  unsafeFlags:
    backupIfUnhealthy: false
    mongosSize: false
    replsetSize: false
    terminationGracePeriod: false
    tls: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: disabled
    schedule: 0 2 * * *
    setFCV: false
    versionServiceEndpoint: https://check.percona.com
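
Once the manifest is applied, the cluster state can be checked before relying on the scheduled backups (a quick check, using the psmdb shortname registered by the operator CRDs):

kubectl get psmdb staging-mongodb-psmdb -n mongodb

The status should report ready before any backup or restore is attempted.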

Find a physical backup to restore.
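
Backups created by the scheduled tasks can be listed and inspected with the psmdb-backup shortname (assuming the operator CRDs are installed and the cluster runs in the mongodb namespace):

kubectl get psmdb-backup -n mongodb
kubectl get psmdb-backup cron-staging-mongodb--20250313040000-tt8l7 -n mongodb -o yaml

The backup selected for the restore: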

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  creationTimestamp: "2025-03-13T04:00:00Z"
  finalizers:
  - percona.com/delete-backup
  generateName: cron-staging-mongodb--20250313040000-
  generation: 1
  labels:
    app.kubernetes.io/instance: staging-mongodb-psmdb
    app.kubernetes.io/managed-by: percona-server-mongodb-operator
    app.kubernetes.io/name: percona-server-mongodb
    app.kubernetes.io/part-of: percona-server-mongodb
    percona.com/backup-ancestor: physical-backups
    percona.com/backup-type: cron
    percona.com/cluster: staging-mongodb-psmdb
  name: cron-staging-mongodb--20250313040000-tt8l7
  namespace: mongodb
  resourceVersion: "34845049"
  uid: e16c8bcc-1a37-4740-b65b-7630ab34a141
spec:
  clusterName: staging-mongodb-psmdb
  compressionLevel: 6
  compressionType: gzip
  storageName: s3-backups
  type: physical
status:
  completed: "2025-03-13T04:00:42Z"
  destination: [REDACTED]
  lastTransition: "2025-03-13T04:00:42Z"
  pbmName: "2025-03-13T04:00:21Z"
  pbmPod: staging-mongodb-psmdb-rs0-1.staging-mongodb-psmdb-rs0.mongodb.svc.cluster.local:27017
  pbmPods:
    cfg: staging-mongodb-psmdb-cfg-1.staging-mongodb-psmdb-cfg.mongodb.svc.cluster.local:27017
    rs0: staging-mongodb-psmdb-rs0-1.staging-mongodb-psmdb-rs0.mongodb.svc.cluster.local:27017
  replsetNames:
  - cfg
  - rs0
  s3: [REDACTED]
  start: "2025-03-13T04:00:21Z"
  state: ready
  storageName: s3-backups
  type: physical

Create the restore:

cat <<EOF | kubectl apply -f-
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore-20250313-115000
spec:
  clusterName: staging-mongodb-psmdb
  backupName: cron-staging-mongodb--20250313040000-tt8l7
EOF
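
The restore progress can then be followed with the psmdb-restore shortname (the mongodb namespace is assumed, as in the manifests above):

kubectl get psmdb-restore restore-20250313-115000 -n mongodb -w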

Version:

Percona MongoDB operator: percona/percona-server-mongodb-operator:1.19.1
MongoDB version: see the cluster manifest above (percona/percona-server-mongodb:7.0.15-9-multi).
PMM client: see the cluster manifest above (percona/pmm-client:2.44.0).

Logs:

Restore result:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  creationTimestamp: "2025-03-13T10:51:01Z"
  generation: 1
  name: restore-20250313-115000
  namespace: mongodb
  resourceVersion: "34963404"
  uid: 6994a227-2b4c-4972-b844-360bb906a7ae
spec:
  backupName: cron-staging-mongodb--20250313040000-tt8l7
  clusterName: staging-mongodb-psmdb
status:
  error: 'prepare statefulsets for physical restore: StatefulSet.apps "staging-mongodb-psmdb-rs0"
    not found'
  pbmName: "2025-03-13T10:55:57.007633171Z"
  state: error
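
Since the error reports that the rs0 StatefulSet was not found, the StatefulSets in the namespace can be checked while the restore is in this state to confirm whether it is actually missing (a diagnostic sketch, assuming the same mongodb namespace):

kubectl get statefulsets -n mongodb
kubectl describe statefulset staging-mongodb-psmdb-rs0 -n mongodb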

The operator logs: https://pastebin.com/4KEnWG7B

Expected Result:

The restore completes successfully.

Actual Result:

The restore fails with the StatefulSet error shown above, and the cluster reverts to its previous state.

Additional Information:

The authentication information remained the same before, during, and after the backup and restoration.

Hi, there are several reasons why a restore could fail. Did this happen more than once? If you suspect a bug, please report it at the Percona JIRA.

Hello,

I provided the operator's logs, but the link seems to be broken. The operator is indeed encountering errors, but I don't understand the exact cause of the issue. Here is a new link to the operator's logs: gist b6203abd5f4b5a52403a2443fcbcc7db on GitHub.
The restore starts at line 343 of the log.

Hi @sebastian.guesdon, this looks strange. We will fix the panic seen in the operator log, but it looks like something removed the staging-mongodb-psmdb-rs0 StatefulSet when the operator created it.
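
One way to see what is deleting or recreating the StatefulSet is to watch the namespace events and the StatefulSet itself while reproducing the restore (a sketch; the grep filter is only illustrative):

kubectl get events -n mongodb --sort-by=.metadata.creationTimestamp | grep -i statefulset
kubectl get statefulset staging-mongodb-psmdb-rs0 -n mongodb -w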