Error upgrading to 1.19: failed to update cluster status

Description:

When upgrading to version 1.19, the operator keeps spamming an error message; however, everything seems to work fine.

Somehow it seems to get stuck updating the status of the Kubernetes object:

status.replsets.cfg.members: Invalid value: "object": replsets.cfg.members in body must be of type array: "object"

Steps to Reproduce:

I just upgraded the Helm chart to 1.19.

Version:

Helm chart for Percona Operator for MongoDB, version 1.18 to 1.19

Logs:

$ k -n psmdb logs -f psmdb-operator-84dcc78d6f-8mx9k

2025-01-30T15:55:16.135Z	ERROR	failed to update cluster status	{"controller": "psmdb-controller", "controllerGroup": "psmdb.percona.com", "controllerKind": "PerconaServerMongoDB", "PerconaServerMongoDB": {"name":"test-mongodb","namespace":"test-dev"}, "namespace": "test-dev", "name": "test-mongodb", "reconcileID": "eba249eb-2232-40f5-8d37-b4ede872a211", "replset": "shard01", "error": "write status: PerconaServerMongoDB.psmdb.percona.com \"test-mongodb\" is invalid: [status.replsets.cfg.members: Invalid value: \"object\": replsets.cfg.members in body must be of type array: \"object\", status.replsets.shard01.members: Invalid value: \"object\": replsets.shard01.members in body must be of type array: \"object\"]", "errorVerbose": "PerconaServerMongoDB.psmdb.percona.com \"test-mongodb\" is invalid: [status.replsets.cfg.members: Invalid value: \"object\": replsets.cfg.members in body must be of type array: \"object\", status.replsets.shard01.members: Invalid value: \"object\": replsets.shard01.members in body must be of type array: \"object\"]\nwrite status\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).writeStatus\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/status.go:269\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).updateStatus\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/status.go:238\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile.func1\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:266\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:470\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:116\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:303\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:224\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"}
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile.func1
	/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:268
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile
	/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:470
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:116
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:303
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:263
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.19.3/pkg/internal/controller/controller.go:224

Additional Information:

Despite the error message, the PSMDB object is in ready status and works fine.

Hi @hipos,

This shouldn’t happen. To check this properly, could you please share the steps you took to upgrade the operator and the cluster?

@hipos I can’t reproduce this issue. :frowning: Did you use our official procedure, Upgrade MongoDB and the Operator - Percona Operator for MongoDB, to update your cluster?

Well, I just upgraded the operator Helm chart’s version as the documentation says, then the crVersion of the psmdb custom resource object. I jumped from 1.18.0 to 1.19.0 in both.

The MongoDB image itself is percona-server-mongodb:6.0.19-16.
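For reference, the upgrade was basically just this (release and namespace names are from my setup, and I’m showing the crVersion change as a kubectl patch, though editing the CR directly is equivalent):

$ helm upgrade psmdb-operator percona/psmdb-operator --version 1.19.0 -n psmdb
$ kubectl -n test-dev patch psmdb test-mongodb --type=merge -p '{"spec":{"crVersion":"1.19.0"}}'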

Status is shown as ‘ready’, but the operator keeps spamming the error message:

"error": "write status: PerconaServerMongoDB.psmdb.percona.com \"test-mongodb\" is invalid: [status.replsets.cfg.members: Invalid value: \"object\": replsets.cfg.members in body must be of type array: \"object\", status.replsets.shard01.members: Invalid value: \"object\": replsets.shard01.members in body must be of type array: \"object\"]"

When the operator starts up, it shows this in the logs:

{"level":"info","ts":1738328924.249747,"logger":"KubeAPIWarningLogger","msg":"unknown field \"status.replsets.cfg.members.test-mongodb-cfg-0\""}
{"level":"info","ts":1738328924.2497528,"logger":"KubeAPIWarningLogger","msg":"unknown field \"status.replsets.cfg.members.test-mongodb-cfg-1\""}
{"level":"info","ts":1738328924.249755,"logger":"KubeAPIWarningLogger","msg":"unknown field \"status.replsets.cfg.members.test-mongodb-cfg-2\""}
{"level":"info","ts":1738328924.2497606,"logger":"KubeAPIWarningLogger","msg":"unknown field \"status.replsets.shard01.members.test-mongodb-shard01-0\""}
{"level":"info","ts":1738328924.2497625,"logger":"KubeAPIWarningLogger","msg":"unknown field \"status.replsets.shard01.members.test-mongodb-shard01-1\""}
{"level":"info","ts":1738328924.2497704,"logger":"KubeAPIWarningLogger","msg":"unknown field \"status.replsets.shard01.members.test-mongodb-shard01-2\""}

Did you update the CRD and RBAC?
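You can check which schema your cluster actually has by inspecting the installed CRD, for example (the CRD name is the one the operator ships; the grep is just a quick sketch, since the manifest is large):

$ kubectl get crd perconaservermongodbs.psmdb.percona.com -o yaml | grep -B1 -A3 'members:'

If that still shows type: array while the 1.19 operator writes members as a map keyed by pod name (which is what your "unknown field" warnings indicate), the CRD was not updated during the upgrade.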

After upgrading the CRD, everything worked fine.

I assumed the Helm chart would also update the CRD, as it does with the RBAC.

I had been using it since version 1.12, and it had worked fine up to 1.18.

Hi @hipos, Helm can’t update CRDs (it is a Helm limitation: CRDs in a chart’s crds/ directory are only installed on the first install, never upgraded). You need to update the CRD and RBAC manually on each upgrade. Previously this happened to work because the operator could still function with the old CRDs, but in 1.18 some critical changes were added, and now you see the problem.
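To fix it, apply the new manifests manually before bumping crVersion, something like this (the URLs follow the operator repo layout for the v1.19.0 tag, the namespace is the one from this thread, and --server-side is used because the CRD is too large for the client-side last-applied-configuration annotation):

$ kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.19.0/deploy/crd.yaml
$ kubectl -n psmdb apply -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.19.0/deploy/rbac.yaml

Please double-check the exact commands against the Upgrade MongoDB and the Operator page for your target version.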
