How to increase the storage size of MongoDB when using an AWS EBS volume

Hi friends,

I would like to know how to increase the storage size of MongoDB. Hoping someone can guide me through the steps I need to take to perform an in-place upgrade.

Currently I have a volumeSpec with storage: "100Gi". I would like to increase the storage size to "200Gi".

...
    volumeSpec:
      pvc:
        storageClassName: mongodb
        resources:
          requests:
            storage: 200Gi
...

I have tried simply altering the value and running helm upgrade. However, that has no effect.

 helm upgrade psmdb-db percona/psmdb-db --namespace mongodb -f psmdb-db.values.yaml

mongodb.storageclass.yaml

kubectl describe sc mongodb 
Name:                  mongodb
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           ebs.csi.aws.com
Parameters:            fsType=xfs,type=gp3
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Retain
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
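For reference, here is a sketch of what the mongodb.storageclass.yaml behind the describe output above might look like. It is reconstructed from that output, not copied from my actual file, so treat the exact layout as an assumption; the kubectl apply line is left commented out. The important field for this whole procedure is allowVolumeExpansion: true — without it the PVC patch below is rejected.

```shell
# Reconstructed sketch of the StorageClass (assumption: field layout inferred
# from the `kubectl describe sc mongodb` output above).
cat > mongodb-storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  fsType: xfs
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF

# kubectl apply -f mongodb-storageclass.yaml   # apply when ready
```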

I managed to find a solution and expanded the MongoDB storage using the following blog post: Percona Operator Volume Expansion Without Downtime.

Throughout this process I stayed connected to MongoDB; there was no downtime, and the dummy database and collection I had created were still there after the expansion.

  1. Expand the size of the PVC
kubectl patch pvc <pvc-name> -n <pvc-namespace> -p '{ "spec": { "resources": { "requests": { "storage": "<NEW STORAGE VALUE>" }}}}'
In my case:
kubectl -n mongodb patch pvc mongod-data-psmdb-db-rs0-0 -p '{ "spec": { "resources": { "requests": { "storage": "250Gi" }}}}'
  2. Check that the PVC has expanded
kubectl describe pvc mongod-data-psmdb-db-rs0-0 -n mongodb
--
Normal   FileSystemResizeSuccessful  2m25s                 kubelet                           MountVolume.NodeExpandVolume succeeded for volume "pvc-0c374fd9-d443-452d-97be-9fe2a70686ae" ip-10-0-1-96.eu-west-2.compute.internal
  3. Check that the MongoDB pod has recognized the expansion
kubectl -n mongodb exec psmdb-db-rs0-0 -- lsblk
---
nvme2n1       259:4    0  250G  0 disk /data/db
  4. Delete the current StatefulSet with --cascade=orphan, so the pods keep running (this step is needed because Kubernetes does not allow editing the volumeClaimTemplates of an existing StatefulSet)
kubectl -n mongodb delete sts psmdb-db-rs0 --cascade=orphan
  5. Update the value in the Helm values file and upgrade
volumeSpec:
  pvc:
    storageClassName: mongodb
    resources:
      requests:
        storage: 250Gi
helm upgrade psmdb-db percona/psmdb-db --namespace mongodb -f psmdb-db.values.yaml
  6. Check the value of the new StatefulSet
kubectl describe sts psmdb-db-rs0 -n mongodb
---
Capacity:      250Gi
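The steps above can be sketched as one script. The namespace, PVC prefix, and Helm release name are taken from this post; the replica count of 3 is an assumption (in a multi-member replica set you would patch every member's PVC, not just rs0-0, which the steps above only show for one member). With DRY_RUN="echo" the script only prints each command so you can review it before running for real.

```shell
#!/bin/sh
# Sketch of the expansion steps above. REPLICAS=3 is an assumption; adjust it
# to your replica set size. With DRY_RUN="echo" the commands are only printed.
NAMESPACE="mongodb"
PVC_PREFIX="mongod-data-psmdb-db-rs0"
STS_NAME="psmdb-db-rs0"
REPLICAS=3
NEW_SIZE="250Gi"
DRY_RUN="echo"   # set to "" to actually run the commands

# Step 1: expand each member's PVC (one PVC per replica set member)
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  $DRY_RUN kubectl -n "$NAMESPACE" patch pvc "${PVC_PREFIX}-${i}" \
    -p "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"${NEW_SIZE}\"}}}}"
  i=$((i + 1))
done

# Step 4: delete the StatefulSet, orphaning the pods so they keep running
$DRY_RUN kubectl -n "$NAMESPACE" delete sts "$STS_NAME" --cascade=orphan

# Step 5: re-create it with the new size via Helm (after editing the values file)
$DRY_RUN helm upgrade psmdb-db percona/psmdb-db --namespace "$NAMESPACE" -f psmdb-db.values.yaml
```

Steps 2, 3, and 6 (the verification commands) are left out here since they are read-only checks you would run interactively between the steps.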

What are the side effects of NOT recreating the StatefulSet?

The side effect is that the StatefulSet's volumeClaimTemplate still carries the old size, so once you scale up the cluster, any new member's PVC will be created at that old, smaller size specified in the StatefulSet.

Please let me know if it makes sense.