kubectl apply reports spurious conflict errors with the Percona YAML

Description:

kubectl apply reports conflict errors with the Percona YAML and only partially succeeds.

Steps to Reproduce:

I am following the instructions here:

I created the namespace without any problem:

gglazer@Glenns-MacBook-Pro percona % kubectl create namespace percona-postgres-operator
namespace/percona-postgres-operator created

I then run the suggested apply command and it fails with conflicts:

gglazer@Glenns-MacBook-Pro percona % kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.2/deploy/bundle.yaml -n percona-postgres-operator
customresourcedefinition.apiextensions.k8s.io/crunchybridgeclusters.postgres-operator.crunchydata.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconapgbackups.pgv2.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconapgclusters.pgv2.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconapgrestores.pgv2.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconapgupgrades.pgv2.percona.com serverside-applied
serviceaccount/percona-postgresql-operator serverside-applied
role.rbac.authorization.k8s.io/percona-postgresql-operator serverside-applied
rolebinding.rbac.authorization.k8s.io/percona-postgresql-operator serverside-applied
deployment.apps/percona-postgresql-operator serverside-applied
Apply failed with 2 conflicts: conflicts with "helm" using apiextensions.k8s.io/v1:

.metadata.labels.app.kubernetes.io/version

.spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:

If you intend to manage all of these fields, please re-run the apply
command with the --force-conflicts flag.

If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.

You may co-own fields by updating your manifest to match the existing
value; in this case, you’ll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
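For anyone hitting the same message: the two resolution paths it describes can be sketched as below. This is a sketch, not a definitive fix; the CRD name is my assumption based on the shared Crunchy CRDs (the error text does not say which objects conflicted), so substitute the one reported in your cluster.

```shell
# Inspect which field managers currently own a possibly-conflicting CRD.
# (CRD name is an assumption; use the one that conflicts in your cluster.)
kubectl get crd crunchybridgeclusters.postgres-operator.crunchydata.com \
  --show-managed-fields -o jsonpath='{.metadata.managedFields[*].manager}'

# If this apply should own those fields, re-run with --force-conflicts,
# as the message itself suggests. This overwrites helm's values.
kubectl apply --server-side --force-conflicts \
  -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.8.2/deploy/bundle.yaml \
  -n percona-postgres-operator
```

Note that forcing conflicts transfers ownership of those fields away from the "helm" manager, which may surprise a later `helm upgrade` of the other chart.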

Version:

2.8.2

Logs:

n/a

Expected Result:

I expect that when a deploy works, it doesn’t spew errors. Warnings are fine, but the language above made me think the deploy failed until I checked it.

Actual Result:

The deploy partially succeeded despite the errors:

NAME                                               READY   STATUS    RESTARTS   AGE
pod/percona-postgresql-operator-6b887b756d-8gnwf   1/1     Running   0          33m

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/percona-postgresql-operator   1/1     1            1           33m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/percona-postgresql-operator-6b887b756d   1         1         1       33m

gglazer@Glenns-MacBook-Pro percona % kubectl get pg -n percona-postgres-operator
No resources found in percona-postgres-operator namespace.

Additional Information:

This is an Azure tenant.

Update: this was caused by cluster-scoped artifacts left over from a Crunchy PGO Helm deploy that we had been testing in a different namespace. One conclusion is that it doesn't seem possible (e.g., for side-by-side testing) to run both types of operator in the same tenant, since they conflict on cluster-scoped resources that sit above the namespace level.
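A quick way to spot such leftovers before applying the bundle (a sketch; the grep pattern assumes the CRD API groups visible in the apply output above):

```shell
# CRDs are cluster-scoped, so a Crunchy PGO install in another namespace
# still leaves them behind for the whole cluster. List any from either
# operator's API groups:
kubectl get crd -o name | grep -E 'crunchydata\.com|percona\.com'
```

If that shows CRDs from a previous Crunchy install you no longer need, deleting them before the apply avoids the ownership conflict (be aware that deleting a CRD also deletes its custom resources cluster-wide).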
