Percona XtraDB Cluster Operator (PXC 8.0) fails to back up to both S3 and local storage

Description:

Both manual and cron backups with the Percona XtraDB Cluster Operator fail on my private Kubernetes cluster (MicroK8s). How do I fix this error?

pxc-backup-minio.yaml

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  finalizers:
    - percona.com/delete-backup
  name: backup-minio
spec:
  pxcCluster: cluster
  storageName: minio

Steps to Reproduce:

Run this command to start a manual backup:
k -n pxc apply -f pxc-backup-minio.yaml

Check the backup result:
k -n pxc get pxc-backup

NAME CLUSTER STORAGE DESTINATION STATUS COMPLETED AGE
cron-cluster-minio-proen-20249120048-372f8 cluster minio-proen s3://backup/cluster-2024-09-12-00:00:48-full Failed 3h47m
backup-minio cluster minio-proen s3://backup/cluster-2024-09-12-03:34:00-full Failed 14m

Version:

PXC image: perconalab/percona-xtradb-cluster-operator:main-pxc8.0
Backup image: perconalab/percona-xtradb-cluster-operator:main-pxc8.0-backup

Logs:

FATAL: /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.36/percona-xtradb-cluster-galera/gcs/src/gcs_group.cpp:group_check_proto_ver():341: Group requested gcs_proto_ver: 4, max supported by this node: 2.Upgrade the node before joining this group.Need to abort.

Defaulted container "xtrabackup" out of: xtrabackup, backup-init (init)

  • LIB_PATH=/usr/lib/pxc
  • . /usr/lib/pxc/backup.sh
    ++ set -o errexit
    ++ SST_INFO_NAME=sst_info
    ++ XBCLOUD_ARGS='--curl-retriable-errors=7 '
    ++ INSECURE_ARG=
    ++ '[' -n true ']'
    ++ [[ true == \f\a\l\s\e ]]
    ++ S3_BUCKET_PATH=cluster-2024-09-12-03:34:00-full
    +++ date +%F-%H-%M
    ++ BACKUP_PATH=cluster-pxc-2024-09-12-03-36-xtrabackup.stream
  • GARBD_OPTS=
  • check_ssl
  • CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  • '[' -f /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt ']'
  • SSL_DIR=/etc/mysql/ssl
  • '[' -f /etc/mysql/ssl/ca.crt ']'
  • CA=/etc/mysql/ssl/ca.crt
  • SSL_INTERNAL_DIR=/etc/mysql/ssl-internal
  • '[' -f /etc/mysql/ssl-internal/ca.crt ']'
  • CA=/etc/mysql/ssl-internal/ca.crt
  • KEY=/etc/mysql/ssl/tls.key
  • CERT=/etc/mysql/ssl/tls.crt
  • '[' -f /etc/mysql/ssl-internal/tls.key -a -f /etc/mysql/ssl-internal/tls.crt ']'
  • KEY=/etc/mysql/ssl-internal/tls.key
  • CERT=/etc/mysql/ssl-internal/tls.crt
  • '[' -f /etc/mysql/ssl-internal/ca.crt -a -f /etc/mysql/ssl-internal/tls.key -a -f /etc/mysql/ssl-internal/tls.crt ']'
  • GARBD_OPTS='socket.ssl_ca=/etc/mysql/ssl-internal/ca.crt;socket.ssl_cert=/etc/mysql/ssl-internal/tls.crt;socket.ssl_key=/etc/mysql/ssl-internal/tls.key;socket.ssl_cipher=;pc.weight=0;'
  • '[' -n backup ']'
  • clean_backup_s3
  • mc_add_bucket_dest
  • echo '+ mc -C /tmp/mc config host add dest https://s3.aspiredigitalgroup.com.au ACCESS_KEY_ID SECRET_ACCESS_KEY '
  • mc -C /tmp/mc config host add dest https://s3.aspiredigitalgroup.com.au ACCESS_KEY_ID SECRET_ACCESS_KEY
    Added dest successfully.
  • is_object_exist backup cluster-2024-09-12-03:34:00-full.sst_info
  • local bucket=backup
  • local object=cluster-2024-09-12-03:34:00-full.sst_info
    ++ mc -C /tmp/mc --json ls dest/backup/cluster-2024-09-12-03:34:00-full.sst_info
    ++ jq .status
  • [[ -n '' ]]
  • is_object_exist backup cluster-2024-09-12-03:34:00-full/
  • local bucket=backup
  • local object=cluster-2024-09-12-03:34:00-full/
    ++ mc -C /tmp/mc --json ls dest/backup/cluster-2024-09-12-03:34:00-full/
    ++ jq .status
  • [[ -n '' ]]
  • request_streaming
    ++ hostname -i
    ++ sed -E 's/.\b([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})\b./\1/'
  • local LOCAL_IP=10.1.167.63
    ++ get_backup_source
    +++ /opt/percona/peer-list -on-start=/usr/bin/get-pxc-state -service=cluster-pxc
    +++ grep wsrep_cluster_size
    +++ sort
    +++ tail -1
    +++ cut -d : -f 12
    ++ CLUSTER_SIZE=3
    ++ '[' -z 3 ']'
    +++ /opt/percona/peer-list -on-start=/usr/bin/get-pxc-state -service=cluster-pxc
    +++ grep wsrep_ready:ON:wsrep_connected:ON:wsrep_local_state_comment:Synced:wsrep_cluster_status:Primary
    +++ sort -r
    +++ tail -1
    +++ cut -d : -f 2
    +++ cut -d . -f 1
    ++ FIRST_NODE=cluster-pxc-0
    ++ SKIP_FIRST_POD='|'
    ++ (( 3 > 1 ))
    ++ SKIP_FIRST_POD=cluster-pxc-0
    ++ /opt/percona/peer-list -on-start=/usr/bin/get-pxc-state -service=cluster-pxc
    ++ grep wsrep_ready:ON:wsrep_connected:ON:wsrep_local_state_comment:Synced:wsrep_cluster_status:Primary
    ++ grep -v cluster-pxc-0
    ++ sort
    ++ tail -1
    ++ cut -d : -f 2
    ++ cut -d . -f 1
  • local NODE_NAME=cluster-pxc-2
  • '[' -z cluster-pxc-2 ']'
  • set +o errexit
  • log INFO 'Garbd was started'
    2024-09-12 03:36:11 [INFO] Garbd was started
  • garbd --address 'gcomm://cluster-pxc-2.cluster-pxc?gmcast.listen_addr=tcp://0.0.0.0:4567' --donor cluster-pxc-2 --group cluster-pxc --options 'socket.ssl_ca=/etc/mysql/ssl-internal/ca.crt;socket.ssl_cert=/etc/mysql/ssl-internal/tls.crt;socket.ssl_key=/etc/mysql/ssl-internal/tls.key;socket.ssl_cipher=;pc.weight=0;' --sst xtrabackup-v2:10.1.167.63:4444/xtrabackup_sst//1 --recv-script=/usr/bin/run_backup.sh
    2024-09-12 03:36:11.590 INFO: CRC-32C: using 64-bit x86 acceleration.
    2024-09-12 03:36:11.590 INFO: Read config:
    daemon: 0
    name: garb
    address: gcomm://cluster-pxc-2.cluster-pxc?gmcast.listen_addr=tcp://0.0.0.0:4567
    group: cluster-pxc
    sst: xtrabackup-v2:10.1.167.63:4444/xtrabackup_sst//1
    donor: cluster-pxc-2
    options: socket.ssl_ca=/etc/mysql/ssl-internal/ca.crt;socket.ssl_cert=/etc/mysql/ssl-internal/tls.crt;socket.ssl_key=/etc/mysql/ssl-internal/tls.key;socket.ssl_cipher=;pc.weight=0;; gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_single_primary=yes; socket.ssl=YES
    cfg:
    log:
    recv_script: /usr/bin/run_backup.sh
    workdir:

2024-09-12 03:36:11.591 WARN: SSL compression is not effective. The option socket.ssl_compression is deprecated and will be removed in future releases.
2024-09-12 03:36:11.591 WARN: Parameter ‘socket.ssl_compression’ is deprecated and will be removed in future versions
2024-09-12 03:36:11.598 INFO: protonet asio version 0
2024-09-12 03:36:11.600 INFO: Using CRC-32C for message checksums.
2024-09-12 03:36:11.600 INFO: backend: asio
2024-09-12 03:36:11.600 INFO: gcomm thread scheduling priority set to other:0
2024-09-12 03:36:11.600 INFO: Fail to access the file (./gvwstate.dat) error (No such file or directory). It is possible if node is booting for first time or re-booting after a graceful shutdown
2024-09-12 03:36:11.600 INFO: Restoring primary-component from disk failed. Either node is booting for first time or re-booting after a graceful shutdown
2024-09-12 03:36:11.600 INFO: GMCast version 0
2024-09-12 03:36:11.601 INFO: (27cfeea8-badb, ‘ssl://0.0.0.0:4567’) listening at ssl://0.0.0.0:4567
2024-09-12 03:36:11.601 INFO: (27cfeea8-badb, ‘ssl://0.0.0.0:4567’) multicast: , ttl: 1
2024-09-12 03:36:11.602 INFO: EVS version 1
2024-09-12 03:36:11.602 INFO: gcomm: connecting to group ‘cluster-pxc’, peer ‘cluster-pxc-2.cluster-pxc:’
2024-09-12 03:36:11.611 INFO: (27cfeea8-badb, ‘ssl://0.0.0.0:4567’) connection established to af8cc657-b8d9 ssl://10.1.109.118:4567
2024-09-12 03:36:11.611 INFO: (27cfeea8-badb, ‘ssl://0.0.0.0:4567’) turning message relay requesting on, nonlive peers: ssl://10.1.106.165:4567 ssl://10.1.9.71:4567
2024-09-12 03:36:11.718 INFO: (27cfeea8-badb, ‘ssl://0.0.0.0:4567’) connection established to cee986a2-a54c ssl://10.1.9.71:4567
2024-09-12 03:36:11.730 INFO: (27cfeea8-badb, ‘ssl://0.0.0.0:4567’) connection established to ea92447e-b4de ssl://10.1.106.165:4567
2024-09-12 03:36:11.730 INFO: (27cfeea8-badb, ‘ssl://0.0.0.0:4567’) connection established to ea92447e-b4de ssl://10.1.106.165:4567
2024-09-12 03:36:12.104 INFO: EVS version upgrade 0 → 1
2024-09-12 03:36:12.104 INFO: declaring af8cc657-b8d9 at ssl://10.1.109.118:4567 stable
2024-09-12 03:36:12.104 INFO: declaring cee986a2-a54c at ssl://10.1.9.71:4567 stable
2024-09-12 03:36:12.104 INFO: declaring ea92447e-b4de at ssl://10.1.106.165:4567 stable
2024-09-12 03:36:12.105 INFO: PC protocol upgrade 0 → 1
2024-09-12 03:36:12.105 INFO: Node af8cc657-b8d9 state primary
2024-09-12 03:36:12.106 INFO: Current view of cluster as seen by this node
view (view_id(PRIM,27cfeea8-badb,88)
memb {
27cfeea8-badb,0
af8cc657-b8d9,0
cee986a2-a54c,0
ea92447e-b4de,0
}
joined {
}
left {
}
partitioned {
}
)
2024-09-12 03:36:12.106 INFO: Save the discovered primary-component to disk
2024-09-12 03:36:12.106 WARN: open file(./gvwstate.dat.tmp) failed(Permission denied)
2024-09-12 03:36:12.602 INFO: gcomm: connected
2024-09-12 03:36:12.602 INFO: Changing maximum packet size to 64500, resulting msg size: 32636
2024-09-12 03:36:12.602 INFO: Shifting CLOSED → OPEN (TO: 0)
2024-09-12 03:36:12.602 INFO: Opened channel ‘cluster-pxc’
2024-09-12 03:36:12.603 INFO: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 4
2024-09-12 03:36:12.603 INFO: STATE_EXCHANGE: sent state UUID: 2868e45b-70b8-11ef-b05f-8bbdc0af14f8
2024-09-12 03:36:12.604 INFO: STATE EXCHANGE: sent state msg: 2868e45b-70b8-11ef-b05f-8bbdc0af14f8
2024-09-12 03:36:12.605 INFO: STATE EXCHANGE: got state msg: 2868e45b-70b8-11ef-b05f-8bbdc0af14f8 from 0 (garb)
2024-09-12 03:36:12.605 INFO: STATE EXCHANGE: got state msg: 2868e45b-70b8-11ef-b05f-8bbdc0af14f8 from 1 (cluster-pxc-2)
2024-09-12 03:36:12.605 INFO: STATE EXCHANGE: got state msg: 2868e45b-70b8-11ef-b05f-8bbdc0af14f8 from 2 (cluster-pxc-1)
2024-09-12 03:36:12.605 INFO: STATE EXCHANGE: got state msg: 2868e45b-70b8-11ef-b05f-8bbdc0af14f8 from 3 (cluster-pxc-0)
2024-09-12 03:36:12.605 INFO: Quorum results:
version = 6,
component = PRIMARY,
conf_id = 87,
members = 3/4 (primary/total),
act_id = 2345,
last_appl. = 2325,
protocols = 4/11/4 (gcs/repl/appl),
vote policy= 0,
group UUID = f92fb1be-6ffe-11ef-9a87-0ac4a615d14f
2024-09-12 03:36:12.605 FATAL: /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.36/percona-xtradb-cluster-galera/gcs/src/gcs_group.cpp:group_check_proto_ver():341: Group requested gcs_proto_ver: 4, max supported by this node: 2.Upgrade the node before joining this group.Need to abort.
2024-09-12 03:36:12.605 INFO: garbd: Terminated.
/usr/bin/backup.sh: line 68: 368 Aborted garbd --address "gcomm://$NODE_NAME.$PXC_SERVICE?gmcast.listen_addr=tcp://0.0.0.0:4567" --donor "$NODE_NAME" --group "$PXC_SERVICE" --options "$GARBD_OPTS" --sst "xtrabackup-v2:$LOCAL_IP:4444/xtrabackup_sst//1" --recv-script="/usr/bin/run_backup.sh"

  • EXID_CODE=134
  • ‘[’ -f /tmp/backup-is-completed ‘]’
  • log ERROR ‘Backup was finished unsuccessfull’
    2024-09-12 03:36:12 [ERROR] Backup was finished unsuccessfull
  • exit 134

Expected Result:

NAME CLUSTER STORAGE DESTINATION STATUS COMPLETED AGE
cron-cluster-minio-proen-20249120048-372f8 cluster minio-proen s3://backup/cluster-2024-09-12-00:00:48-full Completed 3h47m
backup-minio cluster minio-proen s3://backup/cluster-2024-09-12-03:34:00-full Completed 14m

Actual Result:

NAME CLUSTER STORAGE DESTINATION STATUS COMPLETED AGE
cron-cluster-minio-proen-20249120048-372f8 cluster minio-proen s3://backup/cluster-2024-09-12-00:00:48-full Failed 3h47m
backup-minio cluster minio-proen s3://backup/cluster-2024-09-12-03:34:00-full Failed 14m

Hi @Pratak_Eak, please do not use images with the main tag. The latest PXCO release is 1.15.0: percona-xtradb-cluster-operator/deploy/cr.yaml at v1.15.0 · percona/percona-xtradb-cluster-operator · GitHub
You need to use that version. Images with the main tag are built from the main branch and can be broken during the development cycle.
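
For example, a minimal sketch of pinning release-tagged images in cr.yaml (these 1.15.0 image tags also appear in the full cr.yaml later in this thread; adjust them to the release you actually deploy):

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
spec:
  crVersion: 1.15.0
  pxc:
    image: percona/percona-xtradb-cluster:8.0.36-28.1
  backup:
    image: percona/percona-xtradb-cluster-operator:1.15.0-pxc8.0-backup-pxb8.0.35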


Hi @Slava_Sarzhan,
all pods start successfully, but the pxc status still shows an error. I customized the persistent storage to use a specific StorageClass.

kubectl -n pxc describe pxc

Status:
  Backup:
  Conditions:
    Last Transition Time:  2024-09-15T05:27:21Z
    Status:                True
    Type:                  initializing
    Last Transition Time:  2024-09-15T05:29:27Z
    Message:               requested storage (187Gi) is less than actual storage (200Gi)
    Reason:                ErrorReconcile
    Status:                True
    Type:                  error
    Last Transition Time:  2024-09-15T05:41:10Z
    Status:                True
    Type:                  initializing
    Last Transition Time:  2024-09-15T05:41:11Z
    Message:               requested storage (187Gi) is less than actual storage (200Gi)
    Reason:                ErrorReconcile
    Status:                True
    Type:                  error
  Haproxy:
    Label Selector Path:  app.kubernetes.io/component=haproxy,app.kubernetes.io/instance=cluster,app.kubernetes.io/managed-by=percona-xtradb-cluster-operator,app.kubernetes.io/name=percona-xtradb-cluster,app.kubernetes.io/part-of=percona-xtradb-cluster
    Ready:                3
    Size:                 3
    Status:               ready
  Host:  cluster-haproxy.pxc
  Logcollector:
  Message:
    Error:  requested storage (187Gi) is less than actual storage (200Gi)
  Observed Generation:  2

Here is the cr.yaml:

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster
  finalizers:
    - percona.com/delete-pxc-pods-in-order
#    - percona.com/delete-ssl
#    - percona.com/delete-proxysql-pvc
#    - percona.com/delete-pxc-pvc
#  annotations:
#    percona.com/issue-vault-token: "true"
spec:
  crVersion: 1.15.0
#  ignoreAnnotations:
#    - iam.amazonaws.com/role
#  ignoreLabels:
#    - rack
#  secretsName: cluster1-secrets
#  vaultSecretName: keyring-secret-vault
#  sslSecretName: cluster1-ssl
#  sslInternalSecretName: cluster1-ssl-internal
#  logCollectorSecretName: cluster1-log-collector-secrets
#  initContainer:
#    image: percona/percona-xtradb-cluster-operator:1.15.0
#    resources:
#      requests:
#        memory: 100M
#        cpu: 100m
#      limits:
#        memory: 200M
#        cpu: 200m
#  enableCRValidationWebhook: true
  tls:
    enabled: true
#    SANs:
#      - pxc-1.example.com
#      - pxc-2.example.com
#      - pxc-3.example.com
#    issuerConf:
#      name: special-selfsigned-issuer
#      kind: ClusterIssuer
#      group: cert-manager.io
  unsafeFlags:
#    tls: false
#    pxcSize: false
#    proxySize: false
    backupIfUnhealthy: true
  # pause: true
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.36-28.1
    autoRecovery: true
#    expose:
#      enabled: true
#      type: LoadBalancer
#      externalTrafficPolicy: Local
#      internalTrafficPolicy: Local
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#      annotations:
#        networking.gke.io/load-balancer-type: "Internal"
#      labels:
#        rack: rack-22
#    replicationChannels:
#    - name: pxc1_to_pxc2
#      isSource: true
#    - name: pxc2_to_pxc1
#      isSource: false
#      configuration:
#        sourceRetryCount: 3
#        sourceConnectRetry: 60
#        ssl: false
#        sslSkipVerify: true
#        ca: '/etc/mysql/ssl/ca.crt'
#      sourcesList:
#      - host: 10.95.251.101
#        port: 3306
#        weight: 100
#    schedulerName: mycustom-scheduler
#    readinessDelaySec: 15
#    livenessDelaySec: 600
    configuration: |
      [mysqld]
      wsrep_debug=CLIENT
      wsrep_provider_options="gcache.size=1G; gcache.recover=yes"
      max_connections = 500
      interactive_timeout = 300
      wait_timeout = 300
      skip_name_resolve
      [sst]
      xbstream-opts=--decompress
      [xtrabackup]
      compress=lz4
      # for PXC 5.7
      # [xtrabackup]
      # compress
#    imagePullSecrets:
#      - name: private-registry-credentials
#    priorityClassName: high-priority
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    readinessProbes:
#      initialDelaySeconds: 15
#      timeoutSeconds: 15
#      periodSeconds: 30
#      successThreshold: 1
#      failureThreshold: 5
#    livenessProbes:
#      initialDelaySeconds: 300
#      timeoutSeconds: 5
#      periodSeconds: 10
#      successThreshold: 1
#      failureThreshold: 3
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
#    imagePullPolicy: Always
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          memory: 100M
#          cpu: 100m
#        limits:
#          memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    resources:
      requests:
        memory: 8G
        cpu: 1
#        ephemeral-storage: 1G
#      limits:
#        memory: 1G
#        cpu: "1"
#        ephemeral-storage: 1G
#    nodeSelector:
#      disktype: ssd
#    topologySpreadConstraints:
#    - labelSelector:
#        matchLabels:
#          app.kubernetes.io/name: percona-xtradb-cluster
#      maxSkew: 1
#      topologyKey: kubernetes.io/hostname
#      whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
        storageClassName: pxc
        accessModes: [ "ReadWriteOnce" ]
#        dataSource:
#          name: new-snapshot-test
#          kind: VolumeSnapshot
#          apiGroup: snapshot.storage.k8s.io
        resources:
          requests:
            storage: 200G
    gracePeriod: 600
#    lifecycle:
#      preStop:
#        exec:
#          command: [ "/bin/true" ]
#      postStart:
#        exec:
#          command: [ "/bin/true" ]
  haproxy:
    enabled: true
    size: 3
    image: percona/haproxy:2.8.5
#    imagePullPolicy: Always
#    schedulerName: mycustom-scheduler
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    configuration: |
#
#    the actual default configuration file can be found here https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/main/build/haproxy-global.cfg
#
#      global
#        maxconn 2048
#        external-check
#        insecure-fork-wanted
#        stats socket /etc/haproxy/pxc/haproxy.sock mode 600 expose-fd listeners level admin
#
#      defaults
#        default-server init-addr last,libc,none
#        log global
#        mode tcp
#        retries 10
#        timeout client 28800s
#        timeout connect 100500
#        timeout server 28800s
#
#      resolvers kubernetes
#        parse-resolv-conf
#
#      frontend galera-in
#        bind *:3309 accept-proxy
#        bind *:3306
#        mode tcp
#        option clitcpka
#        default_backend galera-nodes
#
#      frontend galera-admin-in
#        bind *:33062
#        mode tcp
#        option clitcpka
#        default_backend galera-admin-nodes
#
#      frontend galera-replica-in
#        bind *:3307
#        mode tcp
#        option clitcpka
#        default_backend galera-replica-nodes
#
#      frontend galera-mysqlx-in
#        bind *:33060
#        mode tcp
#        option clitcpka
#        default_backend galera-mysqlx-nodes
#
#      frontend stats
#        bind *:8404
#        mode http
#        http-request use-service prometheus-exporter if { path /metrics }
#    imagePullSecrets:
#      - name: private-registry-credentials
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    readinessProbes:
#      initialDelaySeconds: 15
#      timeoutSeconds: 1
#      periodSeconds: 5
#      successThreshold: 1
#      failureThreshold: 3
#    livenessProbes:
#      initialDelaySeconds: 60
#      timeoutSeconds: 5
#      periodSeconds: 30
#      successThreshold: 1
#      failureThreshold: 4
#    exposePrimary:
#      enabled: false
#      type: ClusterIP
#      annotations:
#        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#      externalTrafficPolicy: Cluster
#      internalTrafficPolicy: Cluster
#      labels:
#        rack: rack-22
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#    exposeReplicas:
#      enabled: true
#      onlyReaders: false
#      type: ClusterIP
#      annotations:
#        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#      externalTrafficPolicy: Cluster
#      internalTrafficPolicy: Cluster
#      labels:
#        rack: rack-22
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          memory: 100M
#          cpu: 100m
#        limits:
#          memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    resources:
      requests:
        memory: 1G
        cpu: 600m
#      limits:
#        memory: 1G
#        cpu: 700m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        memory: 1G
#        cpu: 500m
#      limits:
#        memory: 2G
#        cpu: 600m
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
#    topologySpreadConstraints:
#    - labelSelector:
#        matchLabels:
#          app.kubernetes.io/name: percona-xtradb-cluster
#      maxSkew: 1
#      topologyKey: kubernetes.io/hostname
#      whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#    lifecycle:
#      preStop:
#        exec:
#          command: [ "/bin/true" ]
#      postStart:
#        exec:
#          command: [ "/bin/true" ]
  proxysql:
    enabled: false
    size: 3
    image: percona/proxysql2:2.5.5
#    imagePullPolicy: Always
#    configuration: |
#      datadir="/var/lib/proxysql"
#
#      admin_variables =
#      {
#        admin_credentials="proxyadmin:admin_password"
#        mysql_ifaces="0.0.0.0:6032"
#        refresh_interval=2000
#
#        cluster_username="proxyadmin"
#        cluster_password="admin_password"
#        checksum_admin_variables=false
#        checksum_ldap_variables=false
#        checksum_mysql_variables=false
#        cluster_check_interval_ms=200
#        cluster_check_status_frequency=100
#        cluster_mysql_query_rules_save_to_disk=true
#        cluster_mysql_servers_save_to_disk=true
#        cluster_mysql_users_save_to_disk=true
#        cluster_proxysql_servers_save_to_disk=true
#        cluster_mysql_query_rules_diffs_before_sync=1
#        cluster_mysql_servers_diffs_before_sync=1
#        cluster_mysql_users_diffs_before_sync=1
#        cluster_proxysql_servers_diffs_before_sync=1
#      }
#
#      mysql_variables=
#      {
#        monitor_password="monitor"
#        monitor_galera_healthcheck_interval=1000
#        threads=2
#        max_connections=2048
#        default_query_delay=0
#        default_query_timeout=10000
#        poll_timeout=2000
#        interfaces="0.0.0.0:3306"
#        default_schema="information_schema"
#        stacksize=1048576
#        connect_timeout_server=10000
#        monitor_history=60000
#        monitor_connect_interval=20000
#        monitor_ping_interval=10000
#        ping_timeout_server=200
#        commands_stats=true
#        sessions_sort=true
#        have_ssl=true
#        ssl_p2s_ca="/etc/proxysql/ssl-internal/ca.crt"
#        ssl_p2s_cert="/etc/proxysql/ssl-internal/tls.crt"
#        ssl_p2s_key="/etc/proxysql/ssl-internal/tls.key"
#        ssl_p2s_cipher="ECDHE-RSA-AES128-GCM-SHA256"
#      }
#    readinessDelaySec: 15
#    livenessDelaySec: 600
#    schedulerName: mycustom-scheduler
#    imagePullSecrets:
#      - name: private-registry-credentials
#    annotations:
#      iam.amazonaws.com/role: role-arn
#    labels:
#      rack: rack-22
#    expose:
#      enabled: false
#      type: ClusterIP
#      annotations:
#        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#      externalTrafficPolicy: Cluster
#      internalTrafficPolicy: Cluster
#      labels:
#        rack: rack-22
#      loadBalancerSourceRanges:
#        - 10.0.0.0/8
#      loadBalancerIP: 127.0.0.1
#    runtimeClassName: image-rc
#    sidecars:
#    - image: busybox
#      command: ["/bin/sh"]
#      args: ["-c", "while true; do trap 'exit 0' SIGINT SIGTERM SIGQUIT SIGKILL; done;"]
#      name: my-sidecar-1
#      resources:
#        requests:
#          memory: 100M
#          cpu: 100m
#        limits:
#          memory: 200M
#          cpu: 200m
#    envVarsSecret: my-env-var-secrets
    resources:
      requests:
        memory: 1G
        cpu: 600m
#      limits:
#        memory: 1G
#        cpu: 700m
#    priorityClassName: high-priority
#    nodeSelector:
#      disktype: ssd
#    sidecarResources:
#      requests:
#        memory: 1G
#        cpu: 500m
#      limits:
#        memory: 2G
#        cpu: 600m
#    containerSecurityContext:
#      privileged: false
#    podSecurityContext:
#      runAsUser: 1001
#      runAsGroup: 1001
#      supplementalGroups: [1001]
#    serviceAccountName: percona-xtradb-cluster-operator-workload
#    topologySpreadConstraints:
#    - labelSelector:
#        matchLabels:
#          app.kubernetes.io/name: percona-xtradb-cluster-operator
#      maxSkew: 1
#      topologyKey: kubernetes.io/hostname
#      whenUnsatisfiable: DoNotSchedule
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
#      advanced:
#        nodeAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            nodeSelectorTerms:
#            - matchExpressions:
#              - key: kubernetes.io/e2e-az-name
#                operator: In
#                values:
#                - e2e-az1
#                - e2e-az2
#    tolerations:
#    - key: "node.alpha.kubernetes.io/unreachable"
#      operator: "Exists"
#      effect: "NoExecute"
#      tolerationSeconds: 6000
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
#      minAvailable: 0
    gracePeriod: 30
#    lifecycle:
#      preStop:
#        exec:
#          command: [ "/bin/true" ]
#      postStart:
#        exec:
#          command: [ "/bin/true" ]
#   loadBalancerSourceRanges:
#     - 10.0.0.0/8
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.15.0-logcollector-fluentbit3.1.4
#    configuration: |
#      [OUTPUT]
#           Name  es
#           Match *
#           Host  192.168.2.3
#           Port  9200
#           Index my_index
#           Type  my_type
    resources:
      requests:
        memory: 100M
        cpu: 200m
  pmm:
    enabled: false
    image: percona/pmm-client:2.42.0
    serverHost: monitoring-service
#    serverUser: admin
#    pxcParams: "--disable-tablestats-limit=2000"
#    proxysqlParams: "--custom-labels=CUSTOM-LABELS"
#    containerSecurityContext:
#      privileged: false
    resources:
      requests:
        memory: 150M
        cpu: 300m
  backup:
    allowParallel: true 
    image: percona/percona-xtradb-cluster-operator:1.15.0-pxc8.0-backup-pxb8.0.35
    backoffLimit: 3
#    serviceAccountName: percona-xtradb-cluster-operator
#    imagePullSecrets:
#      - name: private-registry-credentials
    pitr:
      enabled: false
      storageName: fs-pvc
      timeBetweenUploads: 60
      timeoutSeconds: 60
#      resources:
#        requests:
#          memory: 0.1G
#          cpu: 100m
#        limits:
#          memory: 1G
#          cpu: 700m
    storages:
      s3-us-west:
        type: s3
        verifyTLS: true
#        nodeSelector:
#          storage: tape
#          backupWorker: 'True'
#        resources:
#          requests:
#            memory: 1G
#            cpu: 600m
#        topologySpreadConstraints:
#        - labelSelector:
#            matchLabels:
#              app.kubernetes.io/name: percona-xtradb-cluster
#          maxSkew: 1
#          topologyKey: kubernetes.io/hostname
#          whenUnsatisfiable: DoNotSchedule
#        affinity:
#          nodeAffinity:
#            requiredDuringSchedulingIgnoredDuringExecution:
#              nodeSelectorTerms:
#              - matchExpressions:
#                - key: backupWorker
#                  operator: In
#                  values:
#                  - 'True'
#        tolerations:
#          - key: "backupWorker"
#            operator: "Equal"
#            value: "True"
#            effect: "NoSchedule"
#        annotations:
#          testName: scheduled-backup
#        labels:
#          backupWorker: 'True'
#        schedulerName: 'default-scheduler'
#        priorityClassName: 'high-priority'
#        containerSecurityContext:
#          privileged: true
#        podSecurityContext:
#          fsGroup: 1001
#          supplementalGroups: [1001, 1002, 1003]
#        containerOptions:
#          env:
#          - name: VERIFY_TLS
#            value: "false"
#          args:
#            xtrabackup:
#            - "--someflag=abc"
#            xbcloud:
#            - "--someflag=abc"
#            xbstream:
#            - "--someflag=abc"
        s3:
          bucket: S3-BACKUP-BUCKET-NAME-HERE
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-2
      azure-blob:
        type: azure
        azure:
          credentialsSecret: azure-secret
          container: test
#          endpointUrl: https://accountName.blob.core.windows.net
#          storageClass: Hot
      fs-pvc:
        type: filesystem
#        nodeSelector:
#          storage: tape
#          backupWorker: 'True'
#        resources:
#          requests:
#            memory: 1G
#            cpu: 600m
#        topologySpreadConstraints:
#        - labelSelector:
#            matchLabels:
#              app.kubernetes.io/name: percona-xtradb-cluster
#          maxSkew: 1
#          topologyKey: kubernetes.io/hostname
#          whenUnsatisfiable: DoNotSchedule
#        affinity:
#          nodeAffinity:
#            requiredDuringSchedulingIgnoredDuringExecution:
#              nodeSelectorTerms:
#              - matchExpressions:
#                - key: backupWorker
#                  operator: In
#                  values:
#                  - 'True'
#        tolerations:
#          - key: "backupWorker"
#            operator: "Equal"
#            value: "True"
#            effect: "NoSchedule"
#        annotations:
#          testName: scheduled-backup
#        labels:
#          backupWorker: 'True'
#        schedulerName: 'default-scheduler'
#        priorityClassName: 'high-priority'
#        containerSecurityContext:
#          privileged: true
#        podSecurityContext:
#          fsGroup: 1001
#          supplementalGroups: [1001, 1002, 1003]
        volume:
          persistentVolumeClaim:
            storageClassName: pxc-backup
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 200G
    schedule:
#      - name: "sat-night-backup"
#        schedule: "0 0 * * 6"
#        keep: 3
#        storageName: s3-us-west
      - name: "daily-backup"
        schedule: "0 0 * * *"
        keep: 5
        storageName: fs-pvc

I fixed it by adding the "Gi" suffix to the storage size in cr.yaml. 200G is a decimal unit, roughly 187Gi, which is less than the actual storage (200Gi) reported in the error:

      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
#        dataSource:
#          name: new-snapshot-test
#          kind: VolumeSnapshot
#          apiGroup: snapshot.storage.k8s.io
        resources:
          requests:
-            storage: 200G
+            storage: 200Gi
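
To confirm that the request now matches the provisioned volumes, the PVC sizes can be compared with, for example:

kubectl -n pxc get pvc -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage,CAPACITY:.status.capacity.storage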

@Pratak_Eak, as I understand it, the cluster is okay, but you still have problems with backups. Can you send the log from the backup pod? I also need to know your k8s version and the platform you are using.
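
For example, something like the following can be used to collect that information (the pod name is a placeholder; the backup pod runs an "xtrabackup" container, as seen in the log above):

kubectl -n pxc get pods | grep backup
kubectl -n pxc logs <backup-pod-name> -c xtrabackup
kubectl version
kubectl get nodes -o wide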

Hello, I have the same issue:
FATAL: /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.36/percona-xtradb-cluster-galera/gcs/src/gcs_group.cpp:group_check_proto_ver():341: Group requested gcs_proto_ver: 4, max supported by this node: 2.Upgrade the node before joining this group.Need to abort.
operator : percona/percona-xtradb-cluster-operator:1.15.0
pxc : percona/percona-xtradb-cluster:8.0
backup : percona/percona-xtradb-cluster-operator:1.15.0-pxc8.0-backup-pxb8.0.35

Of course, there is no issue with 1.14.

@Frederic_Coelho1, the issue is different. More detailed information can be found in backup mysql 8.0 fail on 1.15 · Issue #1832 · percona/percona-xtradb-cluster-operator · GitHub