Hi all!
I am new to Percona and MongoDB and am having problems with a Helm upgrade of the Percona packages.
First I installed the Percona Helm charts:
psmdb-operator-1.10.0.tgz
psmdb-db-1.10.0.tgz
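For reference, the install was roughly like this (the release names and the mongodb namespace are just what I used; the db release name is a guess based on the secret name in the error below, and the charts were installed from the downloaded .tgz archives):

# install the operator chart first, then the database chart
helm install psmdb-operator ./psmdb-operator-1.10.0.tgz --namespace mongodb --create-namespace
helm install mongodb-10 ./psmdb-db-1.10.0.tgz --namespace mongodb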
Then I upgraded to 1.11 using helm upgrade. The operator chart psmdb-operator-1.11.0.tgz upgrades fine, but when upgrading to psmdb-db-1.11.0.tgz I get this error:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Secret "mongodb-10-psmdb-db-secrets" in namespace "mongodb" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "mongodb-10"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "mongodb"
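The upgrade commands were along these lines (same assumed release names as above; the operator upgrade succeeds, the db upgrade fails with the error):

# operator chart upgrades without problems
helm upgrade psmdb-operator ./psmdb-operator-1.11.0.tgz --namespace mongodb
# db chart upgrade fails with the ownership metadata error shown above
helm upgrade mongodb-10 ./psmdb-db-1.11.0.tgz --namespace mongodb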
I have been doing similar testing and got the same issue.
The problem seems to be with psmdb-db/templates/cluster-secret.yaml. The first line of code was changed from {{- if not .Values.users }} in version 1.10.0 to {{- if not (hasKey .Values.secrets "users") }} in version 1.11.0.
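One way to see the difference (assuming both chart archives are available locally and using a made-up release name) is to render the manifests of each version and check whether the secret is included:

# with the 1.10.0 condition the cluster secret should not show up here
helm template my-db ./psmdb-db-1.10.0.tgz | grep -B2 -A4 "kind: Secret"
# with the 1.11.0 condition it should be rendered
helm template my-db ./psmdb-db-1.11.0.tgz | grep -B2 -A4 "kind: Secret"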
When deploying version 1.10.0, it seems that cluster-secret.yaml is not rendered, because users are defined in values.yaml. However, for some reason the secret "my-db-psmdb-db-secrets" still gets deployed (I assume by the operator). If I describe the generated secret, the output is:
When upgrading to 1.11.0 (with helm upgrade), cluster-secret.yaml is rendered (because of that if statement in the first line). cluster-secret.yaml then also generates additional labels and annotations on the secret, which causes a conflict and we get the error described above. If I install version 1.11.0 directly (without any upgrades) and describe the deployed secret, the output is (note the labels and annotations):
The solution for successfully upgrading from 1.10.0 to 1.11.0 seems to be to manually patch the "my-db-psmdb-db-secrets" secret and add the missing labels and annotations before upgrading.
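A minimal sketch of that pre-upgrade patch (assuming the release is called my-db, which the secret name suggests, and replacing the namespace placeholder with wherever the release is installed):

# tell Helm that it owns the operator-created secret
kubectl -n <namespace> label secret my-db-psmdb-db-secrets app.kubernetes.io/managed-by=Helm
kubectl -n <namespace> annotate secret my-db-psmdb-db-secrets meta.helm.sh/release-name=my-db
kubectl -n <namespace> annotate secret my-db-psmdb-db-secrets meta.helm.sh/release-namespace=<namespace>
# after this, the helm upgrade to 1.11.0 should no longer complain about ownership metadata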
Since I am also new to MongoDB and Kubernetes, I am wondering: if a similar problem comes up when running helm upgrade in production, should the Kubernetes resources be fixed manually, or how is this usually handled?