PXC PVCs are resized successfully but pods still see the old storage size even after restart

Hi, thank you for building such a powerful database.

I use PXC (percona-xtradb-cluster:8.0.35-27.1) on Kubernetes (v1.29) with percona-xtradb-cluster-operator:1.14.0.

I wanted to increase the storage size of the PXC pods through automatic resizing. After I changed cr.yaml and applied it, I could see that the storage request was resized and the sts was deleted and recreated (as expected).
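
For reference, the steps were roughly the following (a minimal sketch; cluster1 is the cluster name in my setup):

kubectl apply -f cr.yaml
# the PVCs show the new requested size
kubectl get pvc
# the operator deletes and recreates the sts with the new volumeClaimTemplates
kubectl get sts cluster1-pxc -o yaml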

However, when I logged into a pod (e.g. cluster1-pxc-0), I saw that the disk size was still the same.
Even after deleting the pod, it still showed the same storage size as before.
(Even after I tweaked some values in the mysqld config in cr.yaml to force the operator to restart each pod in the sts, it still did not work.)
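
Roughly how I compared what Kubernetes reports with what the pod actually sees (the PVC name datadir-cluster1-pxc-0 and the container name pxc are from my setup and may differ in yours):

# capacity Kubernetes reports for the PVC
kubectl get pvc datadir-cluster1-pxc-0 -o jsonpath='{.status.capacity.storage}'
# size the pod sees on the data volume
kubectl exec -it cluster1-pxc-0 -c pxc -- df -h /var/lib/mysql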

Could anyone please give me some tips on how to solve this?

Debug 1: Disk Size Check Output

bash-5.1$ df -h
Filesystem                                 Size  Used Avail Use% Mounted on
overlay                                     75G  8.4G   63G  12% /
tmpfs                                       64M     0   64M   0% /dev
/dev/vda2                                   75G  8.4G   63G  12% /etc/hosts
shm                                         64M     0   64M   0% /dev/shm
/dev/disk/by-id/virtio-nrt-9fbced4c583847   30G  335M   30G   2% /var/lib/mysql

Debug 2: Warning in sts

Pods selected by this PodDisruptionBudget (selector: &LabelSelector{MatchLabels:map[string]string{app.kubernetes.io/component: pxc,app.kubernetes.io/instance: cluster1,app.kubernetes.io/managed-by: percona-xtradb-cluster-operator,app.kubernetes.io/name: percona-xtradb-cluster,app.kubernetes.io/part-of: percona-xtradb-cluster,},MatchExpressions:[]LabelSelectorRequirement{},}) were found to be unmanaged. As a result, the status of the PDB cannot be calculated correctly, which may result in undefined behavior. To account for these pods please set ".spec.minAvailable" field of the PDB to an integer value.
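
For completeness, this is roughly how I looked at the PDB the warning refers to (the PDB name is a placeholder; pick it from the list):

kubectl get pdb
kubectl describe pdb <pdb-name>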

Value in cr.yaml

      persistentVolumeClaim:
        storageClassName: vultr-block-storage
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            #old: storage: 30Gi
            storage: 40Gi

Value reflected in sts

volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        name: datadir
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 40Gi
        storageClassName: vultr-block-storage
        volumeMode: Filesystem
      status:
        phase: Pending

Hi, I have found the solution to the issue above. I don't think it is related to PXC itself.
In my case, I was able to see the correct disk size in the pods after I restarted the node (from the cloud provider's instance management console, to be specific).
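
If you want to double-check that it really is the filesystem resize that is pending, the PVC conditions are a general Kubernetes place to look (the PVC name is from my setup and may differ in yours):

kubectl describe pvc datadir-cluster1-pxc-0
# check the Conditions section for something like FileSystemResizePending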

So, if your sts has been recreated and the PVCs have been resized successfully, and there are no errors in your cluster or in the operator pod, you can simply restart the nodes one by one. After each node has been restarted, the new PVC size will be reflected inside the pods.
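
One careful way to do the restart, one node at a time (a sketch; the node name is a placeholder, and the actual reboot happens from the provider console):

# evict the pods from the node before rebooting it
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# reboot the node from the cloud provider's console, wait for it to come back, then:
kubectl uncordon <node-name>
# verify the pod now sees the new size
kubectl exec -it cluster1-pxc-0 -c pxc -- df -h /var/lib/mysql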

Hey @chhatra ,

which cloud provider is that? I can’t reproduce it on AWS EKS or GKE - I don’t need to restart the node, the disk just resizes.