After installation of 1.13.0 - existing installations don't work anymore

Dear all

Looks like with the new “numbering” there is a problem with existing installations (on GKE):

no matches for kind “PerconaServerMongoDB” in version “psmdb.percona.com/v1-XYZ-0”
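
For reference, this is how I check which API versions the CRD still serves (assuming the operator’s default CRD name):

```shell
# List every API version the PSMDB CRD defines and whether it is served
kubectl get crd perconaservermongodbs.psmdb.percona.com \
  -o jsonpath='{range .spec.versions[*]}{.name}{"\t"}{.served}{"\n"}{end}'
```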

Regards
John

Hello @jamoser,

What are the steps to reproduce? Did you upgrade from 1.12 to 1.13?

Hi @Sergey_Pronin

No … I have an installation of 1.9.0 on a GKE cluster which gets shut down overnight. I installed 1.13.0, which was OK. But when we tried to bring 1.9.0 back up by applying its cr.yaml, we got the above error.

imho - it’s a no-go. You can’t expect a live system to be migrated to 1.13.0 “on the fly”.

Regards
John

Hi @jamoser, each release supports only the last three versions. We don’t guarantee that version 1.9.0 will work with 1.13.0. Also, you should upgrade to the nearest minor release (1.10 in your case), not straight to the latest version.

Please see the upgrade docs.

Any reason you are staying on 1.9.0? What is blocking you from upgrading?

Maybe it’s a little bit difficult to understand 🙂

We have an installation of 1.9.0

and now I’ve installed 1.13.0

Until 1.12.0 the rbac/crd/whatever had all the versions.

Looks like with 1.13.0 deployed it overwrites them.

So after installing 1.13.0, all other versions get killed.
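
Presumably that is because kubectl apply replaces the whole versions list in the CRD instead of merging it. One way to preview the change before applying (illustrative - use the actual crd.yaml shipped with the release):

```shell
# Preview what applying the new operator's CRD bundle would change
kubectl diff -f deploy/crd.yaml
```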

And possibly you are not aware that we are talking about PROD - there is no “hey, it’s a lovely day and I am going to convert all the mongodb clusters to the new operator version”.

@jamoser what is the goal of installing 1.13.0? Upgrade?

As Ege said - the intended and documented upgrade process is one version at a time.
So if you have 1.9.0, please install 1.10.0 first and upgrade both the Operator and the databases.
Then go with 1.11, and so on.

This way you will have a smooth upgrade experience.
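
As a rough sketch (illustrative URLs - follow the upgrade docs for the exact steps of each release):

```shell
# Illustrative only: apply each release's bundle in order, upgrading
# the Operator and the databases before moving on to the next version
kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.10.0/deploy/bundle.yaml
# ...verify the Operator and databases are healthy on 1.10.0...
kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.11.0/deploy/bundle.yaml
# ...and so on through 1.12.0 and 1.13.0
```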

You are correct that 1.12.0 had all the versions. In 1.13.0 we changed this, and you can see it in the release notes:

  • K8SPSMDB-715 Starting from now, the Operator changed its API version to v1 instead of having a separate API version for each release. The last three API versions are supported in addition to v1, which substantially reduces the size of the Custom Resource Definition to prevent reaching the etcd limit
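
To illustrate (a hypothetical, heavily abbreviated excerpt - the real CRD is far larger), the versions list in the 1.13.0 CRD looks roughly like this:

```yaml
# Hypothetical, abbreviated excerpt of the 1.13.0 CRD versions list
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: perconaservermongodbs.psmdb.percona.com
spec:
  group: psmdb.percona.com
  versions:
    - name: v1        # the new, release-independent API version
      served: true
      storage: true
    - name: v1-12-0   # the last three per-release versions are kept
      served: true
      storage: false
    - name: v1-11-0
      served: true
      storage: false
    - name: v1-10-0
      served: true
      storage: false
    # older versions such as v1-9-0 are no longer served
```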

Please let me know if I misunderstood something here.

CRD is a cluster-wide object; all operators use the same CRD no matter which version they’re running. If you have an operator running version 1.9.0, you shouldn’t apply the CRD from 1.13.0 before upgrading your operator to at least 1.10.

Please correct me if I’m wrong, but your running cluster shouldn’t be terminated, since the CRD is there and it still has the version in it. The 1.13.0 CRD simply sets v1-9-0 to served: false, so the Kubernetes API will throw “not found” if you try to use this API version. Although it’s not terminated, your old operator will fail to reconcile the cluster because it’ll also get “not found” errors from the API.
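
One way to confirm (assuming the operator’s default CRD name):

```shell
# Check whether the v1-9-0 API version is still defined and whether it is served
kubectl get crd perconaservermongodbs.psmdb.percona.com \
  -o jsonpath='{.spec.versions[?(@.name=="v1-9-0")].served}'
```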

“Please correct me if I’m wrong, but your running cluster shouldn’t be terminated, since the CRD is there and it still has the version in it.”

Again: GKE cluster X is running 1.9.0 and gets “restarted” every night. The day before, 1.13.0 was installed with rbac, crd, etc. Today, 1.9.0 does not come back up when applying its cr.yaml.

Bottom line: you can’t run a 1.x.0 and a 1.13.0 cluster in parallel on the same GKE cluster (even if the mongodb clusters/operators run in different namespaces).

Addendum:

In GKE you can have

project

  • clusters (for ex. PROD, TEST, DEV)
    – namespaces (for ex. my-psmdb-cluster-09, my-psmdb-cluster-12, my-psmdb-cluster-13)

So if you deploy a CRD, it’s valid for the whole cluster and will affect every psmdb cluster in it, regardless of namespace. That is why it is not possible for us to have operator 1.13 deployed - it would just overwrite the existing CRD used by everything up to 1.12.
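
You can double-check the scope (output abbreviated, from memory):

```shell
# CRDs themselves are cluster-scoped: NAMESPACED is false, so one
# definition is shared by every namespace in the cluster
kubectl api-resources --api-group=apiextensions.k8s.io
# NAME                        SHORTNAMES   APIVERSION                NAMESPACED   KIND
# customresourcedefinitions   crd,crds     apiextensions.k8s.io/v1   false        CustomResourceDefinition
```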

Yes, CRDs are not namespaced and are cluster-scoped. Thank you for explaining it clearly. We should put this in our documentation for operator upgrades.
