Upgrade to 1.12.0 from 1.11.0

Description:

We use the Terraform Helm charts from percona/percona-helm-charts (tag pxc-operator-1.12.0) to upgrade psmdb-operator and psmdb-db. We are trying to upgrade before moving to EKS 1.25, as the PDBs won't work unless we upgrade to 1.13.0. We get an error in TF when the update runs:

Error: unable to recognize "": no matches for kind "PerconaServerMongoDB" in version "psmdb.percona.com/v1-12-0"

  on .terraform/modules/xxxxx_mongodb/modules/xxxx_mongodb/main.tf line 180, in resource "helm_release" "this":
 180: resource "helm_release" "this" {

A new pod is created (mongodb-operator-psmdb-operator-) and it keeps failing with the following errors:

{"level":"info","ts":1710188388.7305331,"logger":"controller.psmdb-controller","msg":"Starting Controller"}
{"level":"info","ts":1710188388.7307148,"logger":"controller.perconaservermongodbrestore-controller","msg":"Starting EventSource","source":"kind source: *v1.PerconaServerMongoDBRestore"}
{"level":"info","ts":1710188388.7307518,"logger":"controller.perconaservermongodbrestore-controller","msg":"Starting EventSource","source":"kind source: *v1.Pod"}
{"level":"info","ts":1710188388.730758,"logger":"controller.perconaservermongodbrestore-controller","msg":"Starting Controller"}
{"level":"info","ts":1710188388.7307503,"logger":"controller.perconaservermongodbbackup-controller","msg":"Starting EventSource","source":"kind source: *v1.PerconaServerMongoDBBackup"}
{"level":"info","ts":1710188388.7307775,"logger":"controller.perconaservermongodbbackup-controller","msg":"Starting EventSource","source":"kind source: *v1.Pod"}
{"level":"info","ts":1710188388.7307897,"logger":"controller.perconaservermongodbbackup-controller","msg":"Starting Controller"}
I0311 20:19:49.781447       1 request.go:665] Waited for 1.042210079s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/certificates.k8s.io/v1?timeout=32s
{"level":"error","ts":1710188389.987827,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"PerconaServerMongoDB.psmdb.percona.com","error":"no matches for kind \"PerconaServerMongoDB\" in version \"psmdb.percona.com/v1-12-0\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:137\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:545\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:131"}
{"level":"info","ts":1710188390.0890567,"logger":"controller.perconaservermongodbrestore-controller","msg":"Starting workers","worker count":1}
{"level":"info","ts":1710188390.0908763,"logger":"controller.perconaservermongodbbackup-controller","msg":"Starting workers","worker count":1}
I0311 20:20:01.038964       1 request.go:665] Waited for 1.041887442s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/node.k8s.io/v1beta1?timeout=32s
{"level":"error","ts":1710188401.2423782,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"PerconaServerMongoDB.psmdb.percona.com","error":"no matches for kind \"PerconaServerMongoDB\" in version \"psmdb.percona.com/v1-12-0\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:137\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233\nk8s.io/apimachinery/pkg/util/wait.WaitForWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:545\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:131"}
I0311 20:20:11.040002       1 request.go:665] Waited for 1.042164346s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/coordination.k8s.io/v1?timeout=32s
{"level":"error","ts":1710188411.24365,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"PerconaServerMongoDB.psmdb.percona.com","error":"no matches for kind \"PerconaServerMongoDB\" in version \"psmdb.percona.com/v1-12-0\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:137\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233\nk8s.io/apimachinery/pkg/util/wait.WaitForWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:545\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:131"}
I0311 20:20:21.088941       1 request.go:665] Waited for 1.091144782s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/vpcresources.k8s.aws/v1beta1?timeout=32s
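
The error suggests the CRD in the cluster does not yet serve the v1-12-0 API. A quick way to check which API versions the installed CRD defines (a sketch; the CRD name below assumes a default install):

# List the API versions defined on the PerconaServerMongoDB CRD;
# the 1.12.0 operator expects v1-12-0 to appear here.
kubectl get crd perconaservermongodbs.psmdb.percona.com \
  -o jsonpath='{.spec.versions[*].name}'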

This Terraform module has not been updated in 2 years.

Do we still need to update the CRDs manually after the upgrade, even though they are part of the Helm chart?

I can see that the Percona CRD is still on the old version, and when we try to update it we get an error:

Apply failed with 1 conflict: conflict with "terraform-provider-helm_v2.4.1_x5" using apiextensions.k8s.io/v1: .spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
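
Per the message, re-running the apply server-side with --force-conflicts should take those fields over from the Helm provider, e.g. (a sketch, assuming crd.yaml is the 1.12.0 CRD taken from the chart's crds/ directory):

# Take ownership of the conflicting fields (currently owned by the
# terraform-provider-helm field manager named in the error above).
kubectl apply --server-side --force-conflicts -f crd.yaml

But I wasn't sure if that is the supported path for the operator upgrade.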

Steps to Reproduce:

1. Deploy psmdb-operator and psmdb-db 1.11.0 via the helm_release resource in our Terraform module.
2. Bump the chart version to 1.12.0 and run the update through Terraform.
3. The apply fails with the "no matches for kind" error above, and the replacement operator pod crash-loops.

Version:

1.11.0

Logs:

Added above

Expected Result:

Safe upgrade to 1.12.0

Actual Result:

A new operator pod is created and keeps failing with the CRD error above.

Additional Information:

To confirm: rolling back to 1.11.0 in TF replaced the failing pod, and now I see no errors.

Hi @shakusbakus!

Helm does install the CRDs the first time you install a chart, but it doesn't upgrade them automatically on chart upgrade, because that's a tricky thing if you have multiple operators in different namespaces.
I'm not sure if the Terraform module does something in this regard, but plain Helm will not upgrade the CRD, so it needs to be done manually.
You can find the CRD for version 1.12.0 here: https://github.com/percona/percona-helm-charts/blob/psmdb-operator-1.12.1/charts/psmdb-operator/crds/crd.yaml
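
For example, applying it manually before the chart upgrade would look something like this (a sketch; adjust the ref if you pin a different chart tag):

# Apply the CRD bundled with the chart before upgrading the release;
# --server-side avoids the last-applied-configuration size limit
# that client-side apply hits on this large CRD. Add --force-conflicts
# if another field manager (e.g. the Helm provider) still owns fields
# such as .spec.versions.
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/percona/percona-helm-charts/psmdb-operator-1.12.1/charts/psmdb-operator/crds/crd.yaml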

Also, I would propose first trying this operation in some test environment, just to make sure you don't break something.
