GCS Cron Backup failed due to CRD issue

GCS cron backup stopped, showing the error: error: no kind "PerconaXtraDBClusterBackup" is registered for version "pxc.percona.com/v1" in scheme "pkg/scheme/scheme.go:28"

Percona Operator version: v1.11.0
GKE version: 1.21.11

We are also facing issues with the latest configuration updates in the Percona XtraDB Cluster Operator.
Kindly help with a solution for the backup failure and clarification on the latest releases.

Please provide your CR so we can reproduce this issue. It also seems you have an issue with kubectl; try updating it. Do you have the log from the failed backup?

"We are also facing issues with the latest configuration updates in the Percona XtraDB Cluster Operator."

Could you please provide more information about it?

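For reference, here is the backup CRD from the cluster (output presumably from kubectl describe crd perconaxtradbclusterbackups.pxc.percona.com):
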
Name:         perconaxtradbclusterbackups.pxc.percona.com
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:  2022-01-10T09:26:56Z
  Generation:          1
  Managed Fields:
    API Version:  apiextensions.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:acceptedNames:
          f:kind:
          f:listKind:
          f:plural:
          f:shortNames:
          f:singular:
        f:conditions:
    Manager:      kube-apiserver
    Operation:    Update
    Time:         2022-01-10T09:26:56Z
    API Version:  apiextensions.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        f:conversion:
          .:
          f:strategy:
        f:group:
        f:names:
          f:kind:
          f:listKind:
          f:plural:
          f:shortNames:
          f:singular:
        f:scope:
        f:versions:
      f:status:
        f:storedVersions:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2022-01-10T09:26:56Z
  Resource Version:  172103509
  UID:               3ea2a61f-bdef-46e1-8221-f1b79601e2f0
Spec:
  Conversion:
    Strategy:  None
  Group:       pxc.percona.com
  Names:
    Kind:       PerconaXtraDBClusterBackup
    List Kind:  PerconaXtraDBClusterBackupList
    Plural:     perconaxtradbclusterbackups
    Short Names:
      pxc-backup
      pxc-backups
    Singular:  perconaxtradbclusterbackup
  Scope:       Namespaced
  Versions:
    Additional Printer Columns:
      Description:  Cluster name
      Json Path:    .spec.pxcCluster
      Name:         Cluster
      Type:         string
      Description:  Storage name from pxc spec
      Json Path:    .status.storageName
      Name:         Storage
      Type:         string
      Description:  Backup destination
      Json Path:    .status.destination
      Name:         Destination
      Type:         string
      Description:  Job status
      Json Path:    .status.state
      Name:         Status
      Type:         string
      Description:  Completed time
      Json Path:    .status.completed
      Name:         Completed
      Type:         date
      Json Path:    .metadata.creationTimestamp
      Name:         Age
      Type:         date
    Name:           v1
    Schema:
      openAPIV3Schema:
        Properties:
          Spec:
            Type:                                          object
            X - Kubernetes - Preserve - Unknown - Fields:  true
          Status:
            Type:                                          object
            X - Kubernetes - Preserve - Unknown - Fields:  true
        Type:                                              object
    Served:                                                true
    Storage:                                               true
    Subresources:
      Status:
Status:
  Accepted Names:
    Kind:       PerconaXtraDBClusterBackup
    List Kind:  PerconaXtraDBClusterBackupList
    Plural:     perconaxtradbclusterbackups
    Short Names:
      pxc-backup
      pxc-backups
    Singular:  perconaxtradbclusterbackup
  Conditions:
    Last Transition Time:  2022-01-10T09:26:56Z
    Message:               no conflicts found
    Reason:                NoConflicts
    Status:                True
    Type:                  NamesAccepted
    Last Transition Time:  2022-01-10T09:26:56Z
    Message:               the initial names have been accepted
    Reason:                InitialNamesAccepted
    Status:                True
    Type:                  Established
  Stored Versions:
    v1
Events:  <none>

I tried with other Kubernetes versions as well and am facing the same CRD issue for backups.

This is with v1.12.0, the latest available in the Git repository.

If you want to test the operator from the main branch (please use it only for tests), you need to apply it server-side, because the main-branch CRDs are typically too large for a client-side apply (the last-applied-configuration annotation would exceed its size limit):
kubectl apply --server-side -f bundle.yaml
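A minimal sketch of that flow, assuming a fresh checkout of the repository (the deploy/bundle.yaml path follows the repo layout):

git clone https://github.com/percona/percona-xtradb-cluster-operator
cd percona-xtradb-cluster-operator
kubectl apply --server-side -f deploy/bundle.yaml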

Please provide the output of the following command:
kubectl get pxc-backups

kubectl get pxc-backup

Can you please share the Git repo link for the last stable version that can be used?

So, as I can see, this backup was created 29 days ago. Also, you can't get the log of a backup object directly. If you want more information about this object, use kubectl describe pxc-backups/backup1. If you want to check the logs, you need to find the pod of this backup. Using kubectl get pods you will find the pods which were created by the backup job and have 'Failed' status, and from those pods you can get the log, e.g. kubectl logs xb-backup1-s3-us-west-snnvg.
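Put together, the sequence might look like this (the backup and pod names are just the examples from above):

# Inspect the backup object
kubectl describe pxc-backups/backup1
# List pods and spot the failed backup-job pods (their names start with xb-)
kubectl get pods
# Fetch the log from the failed backup pod
kubectl logs xb-backup1-s3-us-west-snnvg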

The last stable version is v1.11.0.

In my case, the pod itself is not coming up. Even recently, after applying the backup.yaml file for a PVC backup, the backup is not working, the pod is not showing up, and the above CRD error is thrown.

What can be done to troubleshoot it? Kindly help with a solution.

I need the log from the operator pod. Also provide the output of the following command: kubectl get pxc <cluster-name> -o yaml. And I need your backup.yaml file.

backup.yaml file

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  # finalizers:
  #   - delete-s3-backup
  name: backup-full-gcs-1
spec:
  pxcCluster: dev-mysql
  storageName: s3-us-west
  # storageName: fs-pvc

Operator logs

{"level":"info","ts":1655891967.0027058,"logger":"setup","msg":"Runs on","platform":"kubernetes","version":"v1.24.1"}
{"level":"info","ts":1655891967.0028338,"logger":"setup","msg":"Git commit: 287c67c2f9ae879ea3e79885a096060a9a726d84 Git branch: main Build time: 2022-06-21T12:31:15Z"}
{"level":"info","ts":1655891967.002849,"logger":"setup","msg":"Go Version: go1.18.3"}
{"level":"info","ts":1655891967.002854,"logger":"setup","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1655891967.757995,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1655891967.758424,"logger":"setup","msg":"Registering Components."}
{"level":"info","ts":1655891972.9660006,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-percona-xtradbcluster"}
{"level":"info","ts":1655891972.9660785,"logger":"setup","msg":"Starting the Cmd."}
{"level":"info","ts":1655891972.9662318,"logger":"controller-runtime.webhook.webhooks","msg":"Starting webhook server"}
{"level":"info","ts":1655891972.9663756,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
{"level":"info","ts":1655891972.9664674,"msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":1655891972.966576,"logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
{"level":"info","ts":1655891972.9666705,"logger":"controller-runtime.webhook","msg":"Serving webhook server","host":"","port":9443}
{"level":"info","ts":1655891973.054017,"logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}
{"level":"info","ts":1655891973.0542722,"msg":"attempting to acquire leader lease default/08db1feb.percona.com...\n"}
{"level":"error","ts":1655891973.085533,"msg":"error initially creating leader election record: leases.coordination.k8s.io is forbidden: User \"system:serviceaccount:default:percona-xtradb-cluster-operator\" cannot create resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"default\"\n","stacktrace":"k8s.io/klog/v2.(*loggingT).output\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:931\nk8s.io/klog/v2.(*loggingT).printf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:753\nk8s.io/klog/v2.Errorf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:1486\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).tryAcquireOrRenew\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:334\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:250\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:249\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).Run\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:206\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func3\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/manager/internal.go:642"}
{"level":"error","ts":1655891977.3430548,"msg":"error retrieving resource lock default/08db1feb.percona.com: leases.coordination.k8s.io \"08db1feb.percona.com\" is forbidden: User \"system:serviceaccount:default:percona-xtradb-cluster-operator\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"default\"\n","stacktrace":"k8s.io/klog/v2.(*loggingT).output\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:931\nk8s.io/klog/v2.(*loggingT).printf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:753\nk8s.io/klog/v2.Errorf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:1486\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).tryAcquireOrRenew\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:330\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:250\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:249\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).Run\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:206\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func3\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/manager/internal.go:642"}
{"level":"error","ts":1655891981.467936,"msg":"error retrieving resource lock default/08db1feb.percona.com: leases.coordination.k8s.io \"08db1feb.percona.com\" is forbidden: User \"system:serviceaccount:default:percona-xtradb-cluster-operator\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"default\"\n","stacktrace":"k8s.io/klog/v2.(*loggingT).output\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:931\nk8s.io/klog/v2.(*loggingT).printf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:753\nk8s.io/klog/v2.Errorf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:1486\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).tryAcquireOrRenew\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:330\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:250\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:249\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).Run\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:206\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func3\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/manager/internal.go:642"}

Ok, as I can see, you are in the wrong namespace, and that is why you can't get the object. That could be the root of your issue. When you create a backup, you need to create it in the same namespace as your cluster.

Using this command you can get the namespace where your cluster is:
kubectl get pxc --all-namespaces
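Once you know it, create the backup in that namespace and check it there, for example (the namespace name here is just taken from the logs above):

kubectl apply -f backup.yaml -n dev-mysql-db
kubectl get pxc-backup -n dev-mysql-db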

I just got the logs from the default namespace, where the deployment is similar to the one in the namespace where my cluster is running.
I am getting a similar error in both namespaces.

Logs of the operator

{"level":"error","ts":1655908048.7443595,"msg":"error retrieving resource lock dev-mysql-db/08db1feb.percona.com: leases.coordination.k8s.io \"08db1feb.percona.com\" is forbidden: User \"system:serviceaccount:dev-mysql-db:percona-xtradb-cluster-operator\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"dev-mysql-db\"\n","stacktrace":"k8s.io/klog/v2.(*loggingT).output\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:931\nk8s.io/klog/v2.(*loggingT).printf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:753\nk8s.io/klog/v2.Errorf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:1486\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).tryAcquireOrRenew\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:330\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:250\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:249\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).Run\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:206\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func3\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/manager/internal.go:642"}
{"level":"error","ts":1655908051.2278821,"msg":"error retrieving resource lock dev-mysql-db/08db1feb.percona.com: leases.coordination.k8s.io \"08db1feb.percona.com\" is forbidden: User \"system:serviceaccount:dev-mysql-db:percona-xtradb-cluster-operator\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"dev-mysql-db\"\n","stacktrace":"k8s.io/klog/v2.(*loggingT).output\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:931\nk8s.io/klog/v2.(*loggingT).printf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:753\nk8s.io/klog/v2.Errorf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:1486\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).tryAcquireOrRenew\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:330\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:250\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:249\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).Run\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:206\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func3\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/manager/internal.go:642"}
{"level":"error","ts":1655908054.498942,"msg":"error retrieving resource lock dev-mysql-db/08db1feb.percona.com: leases.coordination.k8s.io \"08db1feb.percona.com\" is forbidden: User \"system:serviceaccount:dev-mysql-db:percona-xtradb-cluster-operator\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"dev-mysql-db\"\n","stacktrace":"k8s.io/klog/v2.(*loggingT).output\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:931\nk8s.io/klog/v2.(*loggingT).printf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:753\nk8s.io/klog/v2.Errorf\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/klog/v2/klog.go:1486\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).tryAcquireOrRenew\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:330\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:250\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).acquire\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:249\nk8s.io/client-go/tools/leaderelection.(*LeaderElector).Run\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:206\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startLeaderElection.func3\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/manager/internal.go:642"}

I have reproduced your issue. I started the operator v1.10.0 and then changed the image to v1.11.0 using the following command:

kubectl patch deployment percona-xtradb-cluster-operator \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"percona-xtradb-cluster-operator","image":"percona/percona-xtradb-cluster-operator:1.11.0"}]}}}}'
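To double-check which image the deployment ended up with, something like:

kubectl get deployment percona-xtradb-cluster-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'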

And in the operator log I can see the same error you see.

Please check the following doc: Upgrade Database and Operator - Percona Operator for MySQL based on Percona XtraDB Cluster.
When you perform the operator update, you should not skip the first step: you need to apply the new rbac.yaml and crd.yaml.
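A sketch of that first step, assuming the standard repo layout and the v1.11.0 tag (adjust the version to the one you are upgrading to; very large CRD files may need kubectl apply --server-side):

kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.11.0/deploy/crd.yaml
kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.11.0/deploy/rbac.yaml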

In GKE, we upgrade the operator with downtime but no data loss.
We redeploy bundle.yaml and cr.yaml; the same service-id is configured in both cr.yaml files.
Does that have an impact on this error?

I did not catch how you update the operator. Could you please provide step-by-step instructions on how you do it?

Files of v1.10.0 (running operator):

1. kubectl delete -f cr.yaml
2. kubectl delete -f bundle.yaml

Files of v1.11.0 (updated operator image):

3. kubectl apply -f bundle.yaml
4. kubectl apply -f cr.yaml

The same service-id under pxc.configuration is present in both cr.yaml files, and the configurations are otherwise similar. The PVCs remain the same during the upgrade.
As a result, the operator is upgraded with downtime but no data loss.

Ok, it looks like you changed only the image in bundle.yaml and used the old bundle file. Also, I still can't understand why you use this approach for the update. If you use the official way (which is described in our docs), the PVCs will not be removed either, and the new operator can work with the old CR. As soon as you are ready, you can update your PXC cluster too. Please check it. Or maybe you have some specific configuration of your PXC cluster; in that case, please provide me with your cr.yaml.
