When PerconaXtraDBCluster replicas are scaled down from 3 to 1, PVCs from deleted replicas aren't deleted

Hello,

I scaled a PXC Kubernetes cluster down from 3 replicas to 1 (by changing the size key in the custom resource), but the PVCs that belonged to the removed replicas weren't deleted.

Here is the custom resource after the size was changed to 1. The extra pods were deleted as expected, but their PVCs remained, no longer used by any pod.

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: '-100'
    argocd.argoproj.io/tracking-id: 'my-blueprints-prod:pxc.percona.com/PerconaXtraDBCluster:giba/mydb'
  finalizers:
    - delete-pxc-pods-in-order
    - delete-ssl
    - delete-proxysql-pvc
    - delete-pxc-pvc
  labels:
    argocd.argoproj.io/instance: my-blueprints-prod
  name: mydb
  namespace: giba
spec:
  allowUnsafeConfigurations: true
  backup:
    backoffLimit: 10
    image: 'percona/percona-xtradb-cluster-operator:1.14.0-pxc8.0-backup-pxb8.0.35'
    pitr:
      enabled: true
      storageName: s3-pitr
      timeBetweenUploads: 300
    schedule:
      - keep: 1
        name: weekly-backup
        schedule: 37 5 * * 2
        storageName: s3-backup
      - keep: 1
        name: monthly-backup
        schedule: 7 5 21 * *
        storageName: s3-backup
      - keep: 1
        name: yearly-backup
        schedule: 7 5 7 9 *
        storageName: s3-backup
    storages:
      s3-backup:
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 300m
            memory: 4Gi
        s3:
          bucket: >-
            infra-example-prod-backup/cs-01-qk5xmez5/pxc/giba-my-blueprints-prod-mydb-backup
          credentialsSecret: s3-backup-creds
          endpointUrl: 'http://x.x.x.x:9000'
          region: us-east-1
        type: s3
      s3-pitr:
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 150m
            memory: 2Gi
        s3:
          bucket: >-
            infra-example-prod-backup/cs-01-qk5xmez5/pxc/giba-my-blueprints-prod-mydb-pitr
          credentialsSecret: s3-backup-creds
          endpointUrl: 'http://x.x.x.x:9000'
          region: us-east-1
        type: s3
  crVersion: 1.14.0
  haproxy:
    affinity:
      advanced:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    lwsa.cloud/pxc-haproxy: mydb-haproxy
                topologyKey: kubernetes.io/hostname
              weight: 100
    enabled: true
    gracePeriod: 30
    image: 'percona/percona-xtradb-cluster-operator:1.14.0-haproxy'
    labels:
      lwsa.cloud/pxc-haproxy: mydb-haproxy
    resources:
      limits:
        cpu: 1
        memory: 512Mi
      requests:
        cpu: 40m
        memory: 512Mi
    size: 1
  logcollector:
    enabled: true
    image: 'percona/percona-xtradb-cluster-operator:1.14.0-logcollector'
    resources:
      limits:
        cpu: 1
        memory: 512Mi
      requests:
        cpu: 40m
        memory: 512Mi
  pause: false
  pmm:
    enabled: false
  proxysql:
    enabled: false
  pxc:
    affinity:
      advanced:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    lwsa.cloud/pxc-instance: mydb
                topologyKey: kubernetes.io/hostname
              weight: 100
    annotations:
      k8up.io/backup: 'false'
    autoRecovery: true
    gracePeriod: 90
    image: 'percona/percona-xtradb-cluster:8.0.35-27.1'
    labels:
      lwsa.cloud/pxc-instance: mydb
    resources:
      limits:
        cpu: 2
        memory: 2Gi
      requests:
        cpu: 150m
        memory: 2Gi
    size: 1
    volumeSpec:
      persistentVolumeClaim:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: local-path
  secretsName: mydb-secrets
  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: Recommended
    schedule: 0 5 * * *

Hello @gmautner,

Yeah, this is a known one, and it's sort of a deliberate choice. We could delete those PVCs, but then, once you scale back up, the new pods would have to recreate their data and do a full initial sync, which is costly.
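
If you want to reclaim that space in the meantime, the leftover claims can be removed by hand. A minimal sketch, assuming the operator's default datadir-<cluster>-pxc-<ordinal> PVC naming (so here the orphaned ones would be datadir-mydb-pxc-1 and datadir-mydb-pxc-2):

# list the claims in the namespace and check which ones no longer back a pod
kubectl -n giba get pvc

# delete the claims of the removed replicas (double-check the names first)
kubectl -n giba delete pvc datadir-mydb-pxc-1 datadir-mydb-pxc-2

Note that the delete-pxc-pvc finalizer you already have only removes PVCs when the whole PerconaXtraDBCluster object is deleted, not on a scale-down.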

We might add an additional finalizer for that, something like delete-scaling-pvc, to make this behavior opt-in.
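
Purely as an illustration (this finalizer does not exist yet; the name above is just the idea), it would sit alongside the existing ones in the custom resource:

metadata:
  finalizers:
    - delete-pxc-pods-in-order
    - delete-ssl
    - delete-proxysql-pvc
    - delete-pxc-pvc
    - delete-scaling-pvc   # hypothetical: would remove the PVCs of replicas dropped by a scale-down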