Best way to clear up a Helm install


Having worked around all my outstanding issues with configuration and storage options, I needed to clear my MongoDB install on a small kubernetes/kubespray test cluster so that I could reallocate storage to a ClickHouse instance to test its backup/restore capability.

Steps to Reproduce:

This is a small cluster on Hetzner Cloud instances, built on k8s 1.25.6 and kubespray 2.20.0, with MongoDB installed via Helm, using the latest version of the operator, 1.15.0.

I planned to just remove it using helm uninstall, hoping to leave the operator alone and just get rid of the pods etc. Silly me.

Before doing this, a kubespray upgrade to v2.21.0 had been carried out, as some of the underlying software needed updating due to ingress-nginx security issues.

It seemed as if helm uninstall, as opposed to the old helm delete --purge, only cleared away the chart releases, and everything just kept on running.

After deleting both Helm charts, I then removed the namespace as a last resort, but this hasn't worked either: the pods have been deleted, but the namespace is stuck in the Terminating state.

A deeper look shows that the clean-up seems to be stuck on a backup-related finalizer.

I think I can clear the finalizer issue using kubectl proxy, but is there a better way of doing this?
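For reference, the kubectl proxy route can usually be avoided: a minimal sketch of the standard namespace-finalize workaround, assuming the namespace is named mongodb as below, and assuming the stuck per-resource finalizers (delete-backup etc.) have already been cleared by patching those resources. The strip_finalizers helper is a local function I made up for illustration, not a kubectl command.

```shell
# Against a live cluster you would run (needs kubectl access):
#
#   kubectl get ns mongodb -o json \
#     | strip_finalizers \
#     | kubectl replace --raw /api/v1/namespaces/mongodb/finalize -f -
#
# Per-resource finalizers are cleared separately, e.g. (hypothetical name):
#
#   kubectl patch psmdb-backup <backup-name> -n mongodb \
#     --type=merge -p '{"metadata":{"finalizers":[]}}'

# Local helper: empty out spec.finalizers in the namespace JSON.
strip_finalizers() {
  python3 -c 'import json,sys; ns=json.load(sys.stdin); ns["spec"]["finalizers"]=[]; print(json.dumps(ns))'
}

# Demo on a minimal namespace object:
printf '%s' '{"spec":{"finalizers":["kubernetes"]}}' | strip_finalizers
# prints {"spec": {"finalizers": []}}
```

Note this only unsticks the namespace object itself; if the objects inside still carry finalizers, the controller that owns them is the right thing to fix first.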

Kind regards,


root@kube-1:~# kubectl get ns mongodb -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2023-04-17T17:19:43Z"
  deletionTimestamp: "2023-11-06T15:18:57Z"
  labels:
    kubernetes.io/metadata.name: mongodb
  name: mongodb
  resourceVersion: "55151666"
  uid: bf727d7d-2fed-4c30-9b8c-bc5031194628
spec:
  finalizers:
  - kubernetes
status:
  conditions:
  - lastTransitionTime: "2023-11-06T15:19:03Z"
    message: All resources successfully discovered
    reason: ResourcesDiscovered
    status: "False"
    type: NamespaceDeletionDiscoveryFailure
  - lastTransitionTime: "2023-11-06T15:19:03Z"
    message: All legacy kube types successfully parsed
    reason: ParsedGroupVersions
    status: "False"
    type: NamespaceDeletionGroupVersionParsingFailure
  - lastTransitionTime: "2023-11-06T15:19:03Z"
    message: All content successfully deleted, may be waiting on finalization
    reason: ContentDeleted
    status: "False"
    type: NamespaceDeletionContentFailure
  - lastTransitionTime: "2023-11-06T15:19:03Z"
    message: 'Some resources are remaining: has 43 resource instances, has 1 resource'
    reason: SomeResourcesRemain
    status: "True"
    type: NamespaceContentRemaining
  - lastTransitionTime: "2023-11-06T15:19:03Z"
    message: 'Some content in the namespace has finalizers remaining: delete-backup
      in 43 resource instances, delete-psmdb-pods-in-order in 1 resource instances'
    reason: SomeFinalizersRemain
    status: "True"
    type: NamespaceFinalizersRemaining
  phase: Terminating


Hi @Michael_Shield
Thank you for the details on this question. I was able to replicate it. From what we understand, you still need the operator CRDs to remain in the cluster.

A better way to do it would be:

  • If backup objects are remaining, clean them up if they are not needed

kubectl delete psmdb-backup -n mongodb --all

  • Uninstall the database charts

helm uninstall [db-charts]

  • Delete the PVCs, if not needed

kubectl delete pvc [specific PVC names in the cluster]

  • Check for any remaining PVs after deletion

kubectl get pv
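The steps above can be sketched as one sequence. This is a hedged example, not a drop-in script: the release name my-mongodb is a placeholder, and kubectl delete pvc --all assumes nothing else in the namespace needs its storage.

```shell
# 1. Delete leftover backup objects so their delete-backup finalizers can run
kubectl delete psmdb-backup --all -n mongodb

# 2. Uninstall the database chart (the operator chart and its CRDs can stay)
helm uninstall my-mongodb -n mongodb   # "my-mongodb" is a placeholder release name

# 3. Delete the PVCs once the data is definitely no longer needed
kubectl get pvc -n mongodb             # list first, confirm what you are deleting
kubectl delete pvc --all -n mongodb

# 4. Check that no orphaned PVs remain (look for Released entries)
kubectl get pv
```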

Let us know if this makes sense to you or if you have more questions about it.

Thank you for your contribution, @Michael_Shield !
