Not able to take backups of my PXC (Percona XtraDB Cluster) MySQL cluster

I created a PXC (Percona XtraDB Cluster) MySQL cluster with the Percona Operator and enabled scheduled backups to Longhorn and to an S3 bucket.

The scheduled backups keep failing, and I end up with the error below.
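The log lines come from one of the failed xb-cron backup pods (pod name taken from the kubectl output further down), collected with something like:

kubectl -n test-cr-percona logs pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-mfbgs
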
2024-03-22 05:26:37.991 INFO: PC protocol downgrade 1 → 0
2024-03-22 05:26:37.991 INFO: Current view of cluster as seen by this node
view ((empty))
2024-03-22 05:26:37.991 ERROR: failed to open gcomm backend connection: 110: failed to reach primary view (pc.wait_prim_timeout): 110 (Connection timed out)
at /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.35/percona-xtradb-cluster-galera/gcomm/src/pc.cpp:connect():176
2024-03-22 05:26:37.991 ERROR: /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.35/percona-xtradb-cluster-galera/gcs/src/gcs_core.cpp:gcs_core_open():219: Failed to open backend connection: -110 (Connection timed out)
2024-03-22 05:26:38.992 INFO: gcomm: terminating thread
2024-03-22 05:26:38.992 INFO: gcomm: joining thread
2024-03-22 05:26:38.992 ERROR: /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.35/percona-xtradb-cluster-galera/gcs/src/gcs.cpp:gcs_open():1880: Failed to open channel 'cluster2-pxc' at 'gcomm://cluster2-pxc-2.cluster2-pxc?gmcast.listen_addr=tcp://0.0.0.0:4567': -110 (Connection timed out)
2024-03-22 05:26:38.992 INFO: Shifting CLOSED → DESTROYED (TO: 0)
2024-03-22 05:26:38.992 FATAL: Garbd exiting with error: Failed to open connection to group: 110 (Connection timed out)
at /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.35/percona-xtradb-cluster-galera/garb/garb_gcs.cpp:Gcs():35

+ EXIT_CODE=1
+ '[' -f /tmp/backup-is-completed ']'
+ log ERROR 'Backup was finished unsuccessfull'
2024-03-22 05:26:38 [ERROR] Backup was finished unsuccessfull
+ exit 1
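
In case it helps diagnose the gcomm connection timeout: the pxc pods themselves are Running 3/3 (see below), and this is roughly how I check that the cluster is Primary and has all members joined, from inside one of the pxc pods (assuming the main container is named pxc; the root password is a placeholder for the value in cluster2-secrets):

kubectl -n test-cr-percona exec -it cluster2-pxc-0 -c pxc -- \
  mysql -uroot -p'<root password from cluster2-secrets>' \
  -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_%';"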

kubectl get all
NAME READY STATUS RESTARTS AGE
pod/cluster2-haproxy-0 2/2 Running 0 14m
pod/cluster2-haproxy-1 2/2 Running 0 12m
pod/cluster2-pxc-0 3/3 Running 0 14m
pod/cluster2-pxc-1 3/3 Running 0 12m
pod/cluster2-pxc-2 3/3 Running 0 11m
pod/percona-xtradb-cluster-operator-684b668948-7fb46 1/1 Running 0 18m
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-5g8tf 0/1 Error 0 7m54s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-7npl2 0/1 Error 0 5m48s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-9hdpd 0/1 Error 0 5m5s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-9tprf 0/1 Error 0 3m40s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-9zjc4 0/1 Error 0 91s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-jww2b 0/1 Error 0 2m14s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-mfbgs 0/1 Error 0 8m37s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-nzbtq 0/1 Error 0 4m22s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-pmtfj 0/1 Error 0 2m58s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-r98lq 0/1 Error 0 6m30s
pod/xb-cron-cluster2-s3-us-west-202432252059-8fa30-w6dcb 0/1 Error 0 7m12s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-2wwn9 0/1 Error 0 2m56s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-5fw2j 0/1 Error 0 6m33s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-7b79b 0/1 Error 0 5m50s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-8bb8g 0/1 Error 0 10m
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-8lh2k 0/1 Error 0 8m43s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-bhwzp 0/1 Error 0 7m16s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-g4m4t 0/1 Error 0 4m23s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-mqtvp 0/1 Error 0 5m6s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-mt4vz 0/1 Error 0 9m27s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-pgsxx 0/1 Error 0 3m40s
pod/xb-cron-cluster2-s3-us-west-202432252559-8fa30-qpcpk 0/1 Error 0 7m59s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-4t6q2 0/1 Error 0 3m44s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-7nbbc 1/1 Running 0 11s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-drz26 0/1 Error 0 54s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-f4kb6 0/1 Error 0 2m19s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-gp45d 0/1 Error 0 3m1s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-h65f2 0/1 Error 0 4m26s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-lvhwx 0/1 Error 0 97s
pod/xb-cron-cluster2-s3-us-west-202432253059-8fa30-tqnkr 0/1 Error 0 5m9s
pod/xb-cron-cluster2-s3-us-west-202432253559-8fa30-hhlqn 1/1 Running 0 8s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cluster2-haproxy ClusterIP 10.107.169.175 3306/TCP,3309/TCP,33062/TCP,33060/TCP 14m
service/cluster2-haproxy-replicas ClusterIP 10.111.108.128 3306/TCP 14m
service/cluster2-pxc ClusterIP None 3306/TCP,33062/TCP,33060/TCP 14m
service/cluster2-pxc-unready ClusterIP None 3306/TCP,33062/TCP,33060/TCP 14m
service/percona-xtradb-cluster-operator ClusterIP 10.111.109.155 443/TCP 18m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/percona-xtradb-cluster-operator 1/1 1 1 18m

NAME DESIRED CURRENT READY AGE
replicaset.apps/percona-xtradb-cluster-operator-684b668948 1 1 1 18m

NAME READY AGE
statefulset.apps/cluster2-haproxy 2/2 14m
statefulset.apps/cluster2-pxc 3/3 14m

NAME COMPLETIONS DURATION AGE
job.batch/xb-cron-cluster2-s3-us-west-202432252059-8fa30 0/1 8m38s 8m38s
job.batch/xb-cron-cluster2-s3-us-west-202432252559-8fa30 0/1 10m 10m
job.batch/xb-cron-cluster2-s3-us-west-202432253059-8fa30 0/1 5m10s 5m10s
job.batch/xb-cron-cluster2-s3-us-west-202432253559-8fa30 0/1 9s 9s

From the operator logs:
2024-03-22T05:36:58.725Z ERROR Reconciler error {"controller": "pxc-controller", "namespace": "test-cr-percona", "name": "cluster2", "reconcileID": "95d1262e-78cb-4eea-8d0e-217cec24fc24", "error": "get binlog collector deployment for cluster 'cluster2': get storage envs: s3-us-west storage has unsupported type aws", "errorVerbose": "get binlog collector deployment for cluster 'cluster2': get storage envs: s3-us-west storage has unsupported type aws\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).reconcileBackups\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/backup.go:47\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc.(*ReconcilePerconaXtraDBCluster).Reconcile\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxc/controller.go:426\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.0/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.0/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.0/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.0/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650"}
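
From that error, I suspect the way I declared the S3 storage is wrong. For comparison, here is a minimal sketch of how I understand the S3 storage block should look in the 1.14.0 cr.yaml, i.e. type: s3 with a nested s3: section (bucket, region, and secret name are my own values, so treat them as placeholders):

    storages:
      s3-us-west:
        type: s3
        s3:
          bucket: backup-percona
          credentialsSecret: my-cluster-name-backup-s3
          region: us-east-2

Here is my full cr.yaml: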

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster2
  finalizers:
    - delete-pxc-pods-in-order
#    - delete-ssl
#    - delete-proxysql-pvc
#    - delete-pxc-pvc
#  annotations:
#    percona.com/issue-vault-token: "true"
spec:
  crVersion: 1.14.0
#  ignoreAnnotations:
#    - iam.amazonaws.com/role
#  ignoreLabels:
#    - rack
  secretsName: cluster2-secrets
  vaultSecretName: keyring-secret-vault
  sslSecretName: cluster2-ssl
  sslInternalSecretName: cluster2-ssl-internal
  logCollectorSecretName: cluster2-log-collector-secrets
  initContainer:
    image: docker.io/perconalab/percona-xtradb-cluster-operator:main
    resources:
      requests:
        memory: 100M
        cpu: 100m
      limits:
        memory: 200M
        cpu: 200m
#  enableCRValidationWebhook: true
#  tls:
#    SANs:
#      - pxc-1.example.com
#      - pxc-2.example.com
#      - pxc-3.example.com
#    issuerConf:
#      name: special-selfsigned-issuer
#      kind: ClusterIssuer
#      group: cert-manager.io
  allowUnsafeConfigurations: false
#  pause: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: "0 4 * * *"
  pxc:
    size: 1
    image: docker.io/perconalab/percona-xtradb-cluster-operator:main-pxc8.0
    autoRecovery: true
#    expose:
#      enabled: true
#      type: LoadBalancer
#      externalTrafficPolicy: Local
#      internalTrafficPolicy: Local
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#      annotations:
#        networking.gke.io/load-balancer-type: "Internal"
#      labels:
#        rack: rack-22
#    replicationChannels:
#    - name: pxc1_to_pxc2
#      isSource: true
#    - name: pxc2_to_pxc1
#      isSource: false
#      configuration:
#        sourceRetryCount: 3
#        sourceConnectRetry: 60
#        ssl: false
#        sslSkipVerify: true
#        ca: '/etc/mysql/ssl/ca.crt'
#      sourcesList:
#      - host: 10.95.251.101
#        port: 3306
#        weight: 100
#    schedulerName: mycustom-scheduler
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    configuration: |
#      [mysqld]
#      socket.ssl=YES   
#      wsrep_debug=CLIENT
#      wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
#      [sst]
#      xbstream-opts=--decompress
#      [xtrabackup]
#      compress=lz4
#      for PXC 5.7
#      [xtrabackup]
#      compress
#    imagePullSecrets:
#      - name: private-registry-credentials
#    priorityClassName: high-priority
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    readinessProbes:
#      initialDelaySeconds: 15
#      timeoutSeconds: 15
#      periodSeconds: 30
#      successThreshold: 1
#      failureThreshold: 5
#    livenessProbes:
#      initialDelaySeconds: 300
#      timeoutSeconds: 5
#      periodSeconds: 10
#      successThreshold: 1
#      failureThreshold: 3
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
#    imagePullPolicy: Always
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          memory: 100M
#          cpu: 100m
#        limits:
#          memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    resources:
      requests:
        memory: 1G
        cpu: 600m
#        ephemeral-storage: 1G
#      limits:
#        memory: 1G
#        cpu: "1"
#        ephemeral-storage: 1G
#    nodeSelector:
#      disktype: ssd
#    topologySpreadConstraints:
#    - labelSelector:
#        matchLabels:
#          app.kubernetes.io/name: percona-xtradb-cluster
#      maxSkew: 1
#      topologyKey: kubernetes.io/hostname
#      whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
        storageClassName: longhorn
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
#    lifecycle:
#      preStop:
#        exec:
#          command: [ "/bin/true" ]
#      postStart:
#        exec:
#          command: [ "/bin/true" ]
  haproxy:
    enabled: true
    size: 1
    image: docker.io/perconalab/percona-xtradb-cluster-operator:main-haproxy
#    imagePullPolicy: Always
#    schedulerName: mycustom-scheduler
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    configuration: |
#
#    the actual default configuration file can be found here https://github.com/percona/percona-docker/blob/main/haproxy/dockerdir/etc/haproxy/haproxy-global.cfg
#
#      global
#        maxconn 2048
#        external-check
#        insecure-fork-wanted
#        stats socket /etc/haproxy/pxc/haproxy.sock mode 600 expose-fd listeners level admin
#
#      defaults
#        default-server init-addr last,libc,none
#        log global
#        mode tcp
#        retries 10
#        timeout client 28800s
#        timeout connect 100500
#        timeout server 28800s
#
#      resolvers kubernetes
#        parse-resolv-conf
#
#      frontend galera-in
#        bind *:3309 accept-proxy
#        bind *:3306
#        mode tcp
#        option clitcpka
#        default_backend galera-nodes
#
#      frontend galera-admin-in
#        bind *:33062
#        mode tcp
#        option clitcpka
#        default_backend galera-admin-nodes
#
#      frontend galera-replica-in
#        bind *:3307
#        mode tcp
#        option clitcpka
#        default_backend galera-replica-nodes
#
#      frontend galera-mysqlx-in
#        bind *:33060
#        mode tcp
#        option clitcpka
#        default_backend galera-mysqlx-nodes
#
#      frontend stats
#        bind *:8404
#        mode http
#        option http-use-htx
#        http-request use-service prometheus-exporter if { path /metrics }
#    imagePullSecrets:
#      - name: private-registry-credentials
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    readinessProbes:
#      initialDelaySeconds: 15
#      timeoutSeconds: 1
#      periodSeconds: 5
#      successThreshold: 1
#      failureThreshold: 3
#    livenessProbes:
#      initialDelaySeconds: 60
#      timeoutSeconds: 5
#      periodSeconds: 30
#      successThreshold: 1
#      failureThreshold: 4
#    exposePrimary:
#      enabled: false
#      type: ClusterIP
#      annotations:
#        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#      externalTrafficPolicy: Cluster
#      internalTrafficPolicy: Cluster
#      labels:
#        rack: rack-22
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#    exposeReplicas:
#      enabled: false
#      type: ClusterIP
#      annotations:
#        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#      externalTrafficPolicy: Cluster
#      internalTrafficPolicy: Cluster
#      labels:
#        rack: rack-22
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          memory: 100M
#          cpu: 100m
#        limits:
#          memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    resources:
      requests:
        memory: 1G
        cpu: 600m
#      limits:
#        memory: 1G
#        cpu: 700m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        memory: 1G
#        cpu: 500m
#      limits:
#        memory: 2G
#        cpu: 600m
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
#    topologySpreadConstraints:
#    - labelSelector:
#        matchLabels:
#          app.kubernetes.io/name: percona-xtradb-cluster
#      maxSkew: 1
#      topologyKey: kubernetes.io/hostname
#      whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#    lifecycle:
#      preStop:
#        exec:
#          command: [ "/bin/true" ]
#      postStart:
#        exec:
#          command: [ "/bin/true" ]
  proxysql:
    enabled: false
    size: 3
    image: perconalab/percona-xtradb-cluster-operator:main-proxysql
#    imagePullPolicy: Always
#    configuration: |
#      datadir="/var/lib/proxysql"
#
#      admin_variables =
#      {
#        admin_credentials="proxyadmin:admin_password"
#        mysql_ifaces="0.0.0.0:6032"
#        refresh_interval=2000
#
#        cluster_username="proxyadmin"
#        cluster_password="admin_password"
#        checksum_admin_variables=false
#        checksum_ldap_variables=false
#        checksum_mysql_variables=false
#        cluster_check_interval_ms=200
#        cluster_check_status_frequency=100
#        cluster_mysql_query_rules_save_to_disk=true
#        cluster_mysql_servers_save_to_disk=true
#        cluster_mysql_users_save_to_disk=true
#        cluster_proxysql_servers_save_to_disk=true
#        cluster_mysql_query_rules_diffs_before_sync=1
#        cluster_mysql_servers_diffs_before_sync=1
#        cluster_mysql_users_diffs_before_sync=1
#        cluster_proxysql_servers_diffs_before_sync=1
#      }
#
#      mysql_variables=
#      {
#        monitor_password="monitor"
#        monitor_galera_healthcheck_interval=1000
#        threads=2
#        max_connections=2048
#        default_query_delay=0
#        default_query_timeout=10000
#        poll_timeout=2000
#        interfaces="0.0.0.0:3306"
#        default_schema="information_schema"
#        stacksize=1048576
#        connect_timeout_server=10000
#        monitor_history=60000
#        monitor_connect_interval=20000
#        monitor_ping_interval=10000
#        ping_timeout_server=200
#        commands_stats=true
#        sessions_sort=true
#        have_ssl=true
#        ssl_p2s_ca="/etc/proxysql/ssl-internal/ca.crt"
#        ssl_p2s_cert="/etc/proxysql/ssl-internal/tls.crt"
#        ssl_p2s_key="/etc/proxysql/ssl-internal/tls.key"
#        ssl_p2s_cipher="ECDHE-RSA-AES128-GCM-SHA256"
#      }
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    schedulerName: mycustom-scheduler
#    imagePullSecrets:
#      - name: private-registry-credentials
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    expose:
#      enabled: false
#      type: ClusterIP
#      annotations:
#        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#      externalTrafficPolicy: Cluster
#      internalTrafficPolicy: Cluster
#      labels:
#        rack: rack-22
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          memory: 100M
#          cpu: 100m
#        limits:
#          memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    resources:
      requests:
        memory: 1G
        cpu: 600m
#      limits:
#        memory: 1G
#        cpu: 700m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        memory: 1G
#        cpu: 500m
#      limits:
#        memory: 2G
#        cpu: 600m
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
#    topologySpreadConstraints:
#    - labelSelector:
#        matchLabels:
#          app.kubernetes.io/name: percona-xtradb-cluster-operator
#      maxSkew: 1
#      topologyKey: kubernetes.io/hostname
#      whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#    lifecycle:
#      preStop:
#        exec:
#          command: [ "/bin/true" ]
#      postStart:
#        exec:
#          command: [ "/bin/true" ]
#   loadBalancerSourceRanges:
#     - 10.0.0.0/8
  logcollector:
    enabled: true
    image: perconalab/percona-xtradb-cluster-operator:main-logcollector
#    configuration: |
#      [OUTPUT]
#           Name  es
#           Match *
#           Host  192.168.2.3
#           Port  9200
#           Index my_index
#           Type  my_type
    resources:
      requests:
        memory: 100M
        cpu: 200m
  pmm:
    enabled: false
    image: docker.io/percona/pmm-client:2.41.1
    serverHost: monitoring-service
#    serverUser: admin
#    pxcParams: "--disable-tablestats-limit=2000"
#    proxysqlParams: "--custom-labels=CUSTOM-LABELS"
#    containerSecurityContext:
#      privileged: false
    resources:
      requests:
        memory: 150M
        cpu: 300m
  backup:
#    allowParallel: true
    image: docker.io/perconalab/percona-xtradb-cluster-operator:main-pxc8.0-backup
#    backoffLimit: 6
    serviceAccountName: percona-xtradb-cluster-operator
#    imagePullSecrets:
#      - name: private-registry-credentials
    pitr:
      enabled: true
      storageName: s3-us-west
      timeBetweenUploads: 60
      timeoutSeconds: 60
#      resources:
#        requests:
#          memory: 0.1G
#          cpu: 100m
#        limits:
#          memory: 1G
#          cpu: 700m
    storages:
#      backup: 
      s3-us-west:
          type: aws
          bucket: backup-percona
          credentialsSecret: my-cluster-name-backup-s3
          region: us-east-2
#      azure-blob:
#        type: azure
#        azure:
#          credentialsSecret: azure-secret
#          container: test
#          endpointUrl: https://accountName.blob.core.windows.net
#          storageClass: Hot
#      fs-pvc:
#        type: filesystem
#        nodeSelector:
#          storage: tape
#          backupWorker: 'True'
#        resources:
#          requests:
#            memory: 1G
#            cpu: 600m
#        topologySpreadConstraints:
#        - labelSelector:
#            matchLabels:
#              app.kubernetes.io/name: percona-xtradb-cluster
#          maxSkew: 1
#          topologyKey: kubernetes.io/hostname
#          whenUnsatisfiable: DoNotSchedule
#        affinity:
#          nodeAffinity:
#            requiredDuringSchedulingIgnoredDuringExecution:
#              nodeSelectorTerms:
#              - matchExpressions:
#                - key: backupWorker
#                  operator: In
#                  values:
#                  - 'True'
#        tolerations:
#          - key: "backupWorker"
#            operator: "Equal"
#            value: "True"
#            effect: "NoSchedule"
#        annotations:
#          testName: scheduled-backup
#        labels:
#          backupWorker: 'True'
#        schedulerName: 'default-scheduler'
#        priorityClassName: 'high-priority'
#        containerSecurityContext:
#          privileged: true
#        podSecurityContext:
#          fsGroup: 1001
#          supplementalGroups: [1001, 1002, 1003]
#        volume:
#          persistentVolumeClaim:
#            storageClassName: longhorn
#            accessModes: [ "ReadWriteOnce" ]
#            resources:
#              requests:
#                storage: 6G
    schedule:
      - name: "daily-backup"
        schedule: "*/5 * * * *"
        keep: 3
        storageName: s3-us-west
#      - name: "daily-backup"
#        schedule: "0 0 * * *"
#        keep: 5
#        storageName: backup

The deployment keeps launching 2 HAProxy and 3 PXC (MySQL) pods, even though I changed my cr.yaml to reduce the pxc size to 1 and the HAProxy size to 1.
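
I am wondering whether allowUnsafeConfigurations: false is what keeps the sizes at 3 and 2, but I have not confirmed that. This is roughly how I compare what the CR asks for with what is actually running (pxc is the short name of the PerconaXtraDBCluster CRD):

kubectl -n test-cr-percona get pxc cluster2 \
  -o jsonpath='{.spec.pxc.size} {.spec.haproxy.size}{"\n"}'
kubectl -n test-cr-percona get statefulset cluster2-pxc cluster2-haproxy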

Please help; we are trying to get a stable MySQL cluster running.

Hello,

I ran into the issue below when I use the "longhorn" storage class for the backup storage.

ERROR:
ERROR Reconciler error {"controller": "pxc-controller", "namespace": "test-cr-percona", "name": "cluster2", "reconcileID": "ee09dd99-a4c5-4c3f-92c7-3ed950185c94", "error": "get binlog collector deployment for cluster 'cluster2': get storage envs: fs-pvc storage has unsupported type ", "errorVerbose": "get binlog collector deployment for cluster 'cluster2': get storage envs: fs-pvc storage has unsupported type

CODE:
backup:
  allowParallel: true
  image: docker.io/perconalab/percona-xtradb-cluster-operator:main-pxc8.0-backup
  backoffLimit: 6
  serviceAccountName: percona-xtradb-cluster-operator
  allowUnsafeConfigurations: true
  imagePullSecrets:
    - name: private-registry-credentials
  pitr:
    enabled: true
    storageName: fs-pvc
    timeBetweenUploads: 60
    timeoutSeconds: 60
  storages:
    fs-pvc:
      type: Ext4
      volume:
        persistentVolumeClaim:
          storageClassName: longhorn
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 6G
  schedule:
    - name: "daily-backup"
      schedule: "*/10 * * * *"
      keep: 5
      storageName: fs-pvc
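
From the docs and examples, I think the supported storage types are s3, filesystem, and azure, so a filesystem-backed storage would presumably look roughly like this (reusing my Longhorn PVC settings); I am also not sure whether pitr can point at a filesystem storage at all, since the error comes from the binlog collector:

  storages:
    fs-pvc:
      type: filesystem
      volume:
        persistentVolumeClaim:
          storageClassName: longhorn
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 6G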

Could you please help me with this?