Cannot take a backup via the Percona MySQL Operator

Description:

I'm using the Percona Operator for MySQL based on Percona XtraDB Cluster. Scheduled backups ran fine until November 8th; since then, every backup has been failing with errors.
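
For context, this is how I inspect the failed backups. The commands are a sketch; the pxc-backup short name assumes the standard CRD aliases that ship with the operator:

    # List backup objects and their state in the cluster namespace
    kubectl get pxc-backup -n s-devops

    # Describe one failed backup to see the operator's status/error message
    kubectl describe pxc-backup <backup-name> -n s-devops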

The cluster CR YAML is below:
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  namespace: s-devops
spec:
  proxysql:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    enabled: false
    gracePeriod: 30
    image: >-
      registry.connect.redhat.com/percona/percona-xtradb-cluster-operator-containers@sha256:df845ab993fd1229567f62338c69e27ce65d1aeec66455f576f55ecce50eda9f
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 600m
        memory: 1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 2G
  upgradeOptions:
    apply: disabled
    schedule: 0 4 * * *
    versionServiceEndpoint: 'https://check.percona.com'
  backup:
    image: >-
      registry.connect.redhat.com/percona/percona-xtradb-cluster-operator-containers@sha256:1687127a1f86ff48b2623f6be7176ff693631e870f3920d137d5fba31c923a68
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
    schedule:
      - keep: 365
        name: daily-s3-backup
        schedule: 0 1 * * *
        storageName: s3-us-west
    storages:
      azure-blob:
        azure:
          container: test
          credentialsSecret: azure-secret
        type: azure
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10G
      s3-us-west:
        s3:
          bucket: shbdn-devops-percona-mysqlbackup-daily
          credentialsSecret: cluster1-s3-credentials
          endpointUrl: 'https://storage.googleapis.com/'
          region: europe-west1
        type: s3
  crVersion: 1.13.0
  logcollector:
    enabled: true
    image: >-
      registry.connect.redhat.com/percona/percona-xtradb-cluster-operator-containers@sha256:4465f029d19fb9c86d0aaa64e1b8a1695efd2665ae2b4820546434d57f433a65
    resources:
      requests:
        cpu: 200m
        memory: 100M
  allowUnsafeConfigurations: false
  pmm:
    enabled: false
    image: >-
      registry.connect.redhat.com/percona/percona-xtradb-cluster-operator-containers@sha256:a18f0c877c6f01408dc3a25caddc0a54c73c21bbd931942bd775c45e67a03876
    resources:
      requests:
        cpu: 300m
        memory: 150M
    serverHost: monitoring-service
  haproxy:
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    enabled: true
    gracePeriod: 30
    image: >-
      registry.connect.redhat.com/percona/percona-xtradb-cluster-operator-containers@sha256:c36bab0d21d0530cdad2d53f111bda75022bec6d6adc5fbd63238462ada43c75
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      requests:
        cpu: 600m
        memory: 1G
    size: 3
  pxc:
    size: 3
    autoRecovery: true
    expose:
      enabled: true
      type: NodePort
    resources:
      requests:
        cpu: 600m
        memory: 1G
    affinity:
      antiAffinityTopologyKey: kubernetes.io/hostname
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
    image: >-
      registry.connect.redhat.com/percona/percona-xtradb-cluster-operator-containers@sha256:e33af88c3b48aef890af9e77901234d9458d9d1d7666b7d41c86a686f6ad7ccb
    replicationChannels:
      - isSource: true
        name: cluster1_to_cluster2
    podDisruptionBudget:
      maxUnavailable: 1
  updateStrategy: SmartUpdate

Version:

1.13.0

Logs:

The donor pod, cluster1-pxc-2, shows several different errors in its logs:
{"log":"2023-11-09T04:18:03.153897Z 0 [Note] [MY-000000] [WSREP-SST] donor: => Rate:[7.66KiB/s] Avg:[7.66KiB/s] Elapsed:0:02:00 ETA 4:36:38\r donor: => Rate:[7.66KiB/s] Avg:[7.66KiB/s] Elapsed:0:02:00 \n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:02.764020-00:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (301571343)\n","file":"/var/lib/mysql/innobackup.backup.log"}
{"log":"xtrabackup: Error writing file '' (OS errno 32 - Broken pipe)\n","file":"/var/lib/mysql/innobackup.backup.log"}
{"log":"2023-11-09T04:18:03.154323-00:00 4 [ERROR] [MY-011825] [Xtrabackup] failed to copy datafile ./dummy_test/REPLICATEST01.ibd\n","file":"/var/lib/mysql/innobackup.backup.log"}
{"log":"2023-11-09T04:18:03.982754Z 0 [ERROR] [MY-000000] [WSREP-SST] ******************* FATAL ERROR ********************** \n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:03.982827Z 0 [ERROR] [MY-000000] [WSREP-SST] xtrabackup finished with error: 1. Check /var/lib/mysql//innobackup.backup.log\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:03.982840Z 0 [ERROR] [MY-000000] [WSREP-SST] Line 2061\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:03.987287Z 0 [ERROR] [MY-000000] [WSREP-SST] ------------ i
{"log":"2023-11-09T04:18:04.005574Z 0 [ERROR] [MY-000000] [WSREP] Process completed with error: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.130.28.50:4444/xtrabackup_sst//1' --socket '/tmp/mysql.sock' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib64/mysql/plugin/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '8.0.32-24.2' --binlog 'binlog' --gtid '042e9eaf-383b-11ee-ad42-131a862e7064:120350' : 22 (Invalid argument)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:04.007088Z 0 [Note] [MY-000000] [Galera] SST sending failed: -22\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:04.007154Z 0 [Note] [MY-000000] [WSREP] Server status change donor -> joined\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:04.007285Z 0 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:04.007367Z 0 [ERROR] [MY-000000] [WSREP] Command did not run: wsrep_sst_xtrabackup-v2 --role 'donor' --address '10.130.28.50:4444/xtrabackup_sst//1' --socket '/tmp/mysql.sock' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib64/mysql/plugin/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --mysqld-version '8.0.32-24.2' --binlog 'binlog' --gtid '042e9eaf-383b-11ee-ad42-131a862e7064:120350' \n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:04.008396Z 0 [Warning] [MY-000000] [Galera] 2.0 (cluster1-pxc-2): State transfer to 3.0 (garb) failed: -22 (Invalid argument)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:04.008454Z 0 [Note] [MY-000000] [Galera] Shifting DONOR/DESYNCED -> JOINED (TO: 120350)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:04.008539Z 0 [Note] [MY-000000] [Galera] Processing event queue:... -nan% (0/0 events) complete.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:03.764319-00:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (301571343)\n","file":"/var/lib/mysql/innobackup.backup.log"}
{"log":"2023-11-09T04:18:05.009654Z 0 [Note] [MY-000000] [Galera] Member 2.0 (cluster1-pxc-2) synced with group.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009663Z 0 [Note] [MY-000000] [Galera] declaring 33d87cf9-a8bc at ssl://10.130.38.222:4567 stable\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009753Z 0 [Note] [MY-000000] [Galera] Processing event queue:...100.0% (1/1 events) complete.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009768Z 0 [Note] [MY-000000] [Galera] Shifting JOINED -> SYNCED (TO: 120350)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009769Z 0 [Note] [MY-000000] [Galera] declaring 540e7622-9716 at ssl://10.131.39.80:4567 stable\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009832Z 10 [Note] [MY-000000] [Galera] Server cluster1-pxc-2 synced with group\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009834Z 0 [Note] [MY-000000] [Galera] forgetting a975e53e-9d0e (ssl://10.130.28.50:4567)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009858Z 10 [Note] [MY-000000] [WSREP] Server status change joined -> synced\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009869Z 10 [Note] [MY-000000] [WSREP] Synchronized with group, ready for connections\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.009876Z 10 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.010578Z 0 [Note] [MY-000000] [Galera] Node 33d87cf9-a8bc state primary\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.010894Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node\nview (view_id(PRIM,33d87cf9-a8bc,219)\nmemb {\n\t33d87cf9-a8bc,0\n\t540e7622-9716,0\n\t71030c76-8d12,0\n\t}\njoined {\n\t}\nleft {\n\t}\npartitioned {\n\ta975e53e-9d0e,0\n\t}\n)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.010923Z 0 [Note] [MY-000000] [Galera] Save the discovered primary-component to disk\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.017212Z 0 [Note] [MY-000000] [Galera] forgetting a975e53e-9d0e (ssl://10.130.28.50:4567)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.017223Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = yes, bootstrap = no, my_idx = 2, memb_num = 3\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.017302Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: Waiting for state UUID.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166038Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: sent state msg: fab1cc9e-7eb6-11ee-a4ec-d32fd61b9acd\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166535Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: got state msg: fab1cc9e-7eb6-11ee-a4ec-d32fd61b9acd from 0 (cluster1-pxc-0)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166574Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: got state msg: fab1cc9e-7eb6-11ee-a4ec-d32fd61b9acd from 1 (cluster1-pxc-1)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166597Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: got state msg: fab1cc9e-7eb6-11ee-a4ec-d32fd61b9acd from 2 (cluster1-pxc-2)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166637Z 0 [Note] [MY-000000] [Galera] Quorum results:\n\tversion = 6,\n\tcomponent = PRIMARY,\n\tconf_id = 213,\n\tmembers = 3/3 (primary/total),\n\tact_id = 120350,\n\tlast_appl. = 110073,\n\tprotocols = 2/10/4 (gcs/repl/appl),\n\tvote policy= 0,\n\tgroup UUID = 042e9eaf-383b-11ee-ad42-131a862e7064\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166730Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [173, 173]\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166844Z 2 [Note] [MY-000000] [Galera] ####### processing CC 120351, local, ordered\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166895Z 2 [Note] [MY-000000] [Galera] Maybe drain monitors from 120350 upto current CC event 120351 upto:120350\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166912Z 2 [Note] [MY-000000] [Galera] Drain monitors from 120350 up to 120350\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166929Z 2 [Note] [MY-000000] [Galera] ####### My UUID: 71030c76-43c3-11ee-8d12-122c929ba694\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166940Z 2 [Note] [MY-000000] [Galera] Skipping cert index reset\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166951Z 2 [Note] [MY-000000] [Galera] REPL Protocols: 10 (5)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166960Z 2 [Note] [MY-000000] [Galera] ####### Adjusting cert position: 120350 -> 120351\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.166991Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.176175Z 2 [Note] [MY-000000] [Galera] ================================================\nView:\n id: 042e9eaf-383b-11ee-ad42-131a862e7064:120351\n status: primary\n protocol_version: 4\n capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO\n final: no\n own_index: 2\n members(3):\n\t0: 33d87cf9-46a4-11ee-a8bc-3f7b432d418e, cluster1-pxc-0\n\t1: 540e7622-3879-11ee-9716-ab8c1cca7536, cluster1-pxc-1\n\t2: 71030c76-43c3-11ee-8d12-122c929ba694, cluster1-pxc-2\n=================================================\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.176271Z 2 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.182882Z 2 [Note] [MY-000000] [Galera] Recording CC from group: 120351\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.182963Z 2 [Note] [MY-000000] [Galera] Lowest cert index boundary for CC from group: 110074\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:05.182977Z 2 [Note] [MY-000000] [Galera] Min available from gcache for CC from group: 24\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-09T04:18:10.359906Z 0 [Note] [MY-000000] [Galera] cleaning up a975e53e-9d0e (ssl://10.130.28.50:4567)\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-10T01:01:07.633684Z 0 [Warning] [MY-000000] [Galera] Handshake failed: sslv3 alert certificate expired\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-10T01:01:09.133522Z 0 [Warning] [MY-000000] [Galera] Handshake failed: sslv3 alert certificate expired\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-10T01:01:10.634007Z 0 [Warning] [MY-000000] [Galera] Handshake failed: sslv3 alert certificate expired\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-10T01:01:12.133724Z 0 [Warning] [MY-000000] [Galera] Handshake failed: sslv3 alert certificate expired\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-10T01:01:13.633689Z 0 [Warning] [MY-000000] [Galera] Handshake failed: sslv3 alert certificate expired\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2023-11-10T01:01:15.134537Z 0 [Warning] [MY-000000] [Galera] Handshake failed: sslv3 alert certificate expired\n","file":"/var/
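
The repeated "Handshake failed: sslv3 alert certificate expired" warnings point at lapsed TLS certificates. A quick way to verify the expiry dates (a sketch, assuming the operator's default secret names cluster1-ssl and cluster1-ssl-internal):

    # Print the expiry of the internal (replication/SST) certificate
    kubectl get secret cluster1-ssl-internal -n s-devops -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate

    # Same check for the client-facing certificate
    kubectl get secret cluster1-ssl -n s-devops -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate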

Steps to Reproduce:

I tried two different backup storages.
First:

kind: PerconaXtraDBClusterBackup
apiVersion: pxc.percona.com/v1
metadata:
  name: backup1
  namespace: s-devops
spec:
  pxcCluster: cluster1
  storageName: fs-pvc
The error is below:
2023-11-20 13:22:57 [INFO] Garbd was started

+ garbd --address 'gcomm://cluster1-pxc-2.cluster1-pxc?gmcast.listen_addr=tcp://0.0.0.0:4567' --donor cluster1-pxc-2 --group cluster1-pxc --options 'socket.ssl_ca=/etc/mysql/ssl-internal/ca.crt;socket.ssl_cert=/etc/mysql/ssl-internal/tls.crt;socket.ssl_key=/etc/mysql/ssl-internal/tls.key;socket.ssl_cipher=;pc.weight=0;' --sst xtrabackup-v2:10.129.20.122:4444/xtrabackup_sst//1 --recv-script=/usr/bin/run_backup.sh
2023-11-20 13:22:57.700 INFO: CRC-32C: using 64-bit x86 acceleration.
2023-11-20 13:22:57.700 INFO: Read config:
    daemon:      0
    name:        garb
    address:     gcomm://cluster1-pxc-2.cluster1-pxc?gmcast.listen_addr=tcp://0.0.0.0:4567
    group:       cluster1-pxc
    sst:         xtrabackup-v2:10.129.20.122:4444/xtrabackup_sst//1
    donor:       cluster1-pxc-2
    options:     socket.ssl_ca=/etc/mysql/ssl-internal/ca.crt;socket.ssl_cert=/etc/mysql/ssl-internal/tls.crt;socket.ssl_key=/etc/mysql/ssl-internal/tls.key;socket.ssl_cipher=;pc.weight=0;; gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_single_primary=yes; socket.ssl=YES
    cfg:
    log:
    recv_script: /usr/bin/run_backup.sh
    workdir:

2023-11-20 13:22:57.700 WARN: SSL compression is not effective. The option socket.ssl_compression is deprecated and will be removed in future releases.
2023-11-20 13:22:57.700 WARN: Parameter 'socket.ssl_compression' is deprecated and will be removed in future versions
2023-11-20 13:22:57.705 INFO: protonet asio version 0
2023-11-20 13:22:57.707 INFO: Using CRC-32C for message checksums.
2023-11-20 13:22:57.707 INFO: backend: asio
2023-11-20 13:22:57.707 INFO: gcomm thread scheduling priority set to other:0
2023-11-20 13:22:57.707 INFO: Fail to access the file (./gvwstate.dat) error (No such file or directory). It is possible if node is booting for first time or re-booting after a graceful shutdown
2023-11-20 13:22:57.707 INFO: Restoring primary-component from disk failed. Either node is booting for first time or re-booting after a graceful shutdown
2023-11-20 13:22:57.708 INFO: GMCast version 0
2023-11-20 13:22:57.711 INFO: (eb99d2a4-b379, 'ssl://0.0.0.0:4567') listening at ssl://0.0.0.0:4567
2023-11-20 13:22:57.711 INFO: (eb99d2a4-b379, 'ssl://0.0.0.0:4567') multicast: , ttl: 1
2023-11-20 13:22:57.711 INFO: EVS version 1
2023-11-20 13:22:57.711 INFO: gcomm: connecting to group 'cluster1-pxc', peer 'cluster1-pxc-2.cluster1-pxc:'
2023-11-20 13:23:00.713 INFO: announce period timed out (pc.announce_timeout)
2023-11-20 13:23:00.713 INFO: EVS version upgrade 0 -> 1
2023-11-20 13:23:00.713 INFO: PC protocol upgrade 0 -> 1
2023-11-20 13:23:00.713 WARN: no nodes coming from prim view, prim not possible
2023-11-20 13:23:00.713 INFO: Current view of cluster as seen by this node
view (view_id(NON_PRIM,eb99d2a4-b379,1)
memb {
    eb99d2a4-b379,0
    }
joined {
    }
left {
    }
partitioned {
    }
)
2023-11-20 13:23:01.213 WARN: last inactive check more than PT1.5S (3*evs.inactive_check_period) ago (PT3.50218S), skipping check
2023-11-20 13:23:30.729 INFO: Current view of cluster as seen by this node
view (view_id(NON_PRIM,eb99d2a4-b379,1)
memb {
    eb99d2a4-b379,0
    }
joined {
    }
left {
    }
partitioned {
    }
)
2023-11-20 13:23:30.729 INFO: PC protocol downgrade 1 -> 0
2023-11-20 13:23:30.729 INFO: Current view of cluster as seen by this node
view ((empty))
2023-11-20 13:23:30.729 ERROR: failed to open gcomm backend connection: 110: failed to reach primary view (pc.wait_prim_timeout): 110 (Connection timed out)
    at gcomm/src/pc.cpp:connect():161
2023-11-20 13:23:30.729 ERROR: gcs/src/gcs_core.cpp:gcs_core_open():219: Failed to open backend connection: -110 (Connection timed out)
2023-11-20 13:23:31.729 INFO: gcomm: terminating thread
2023-11-20 13:23:31.729 INFO: gcomm: joining thread
2023-11-20 13:23:31.729 ERROR: gcs/src/gcs.cpp:gcs_open():1811: Failed to open channel 'cluster1-pxc' at 'gcomm://cluster1-pxc-2.cluster1-pxc?gmcast.listen_addr=tcp://0.0.0.0:4567': -110 (Connection timed out)
2023-11-20 13:23:31.729 INFO: Shifting CLOSED -> DESTROYED (TO: 0)
2023-11-20 13:23:31.729 FATAL: Garbd exiting with error: Failed to open connection to group: 110 (Connection timed out)
    at garb/garb_gcs.cpp:Gcs():35

+ EXID_CODE=1
+ '[' -f /tmp/backup-is-completed ']'
+ log ERROR 'Backup was finished unsuccessfull'
2023-11-20 13:23:31 [ERROR] Backup was finished unsuccessfull
+ exit 1
Second:

kind: PerconaXtraDBClusterBackup
apiVersion: pxc.percona.com/v1
metadata:
  name: backup10
  namespace: s-devops
spec:
  pxcCluster: cluster1
  storageName: s3-us-west

The error logs are below:
2023-11-20 13:25:17.463 WARN: SSL compression is not effective. The option socket.ssl_compression is deprecated and will be removed in future releases.
2023-11-20 13:25:17.463 WARN: Parameter 'socket.ssl_compression' is deprecated and will be removed in future versions
2023-11-20 13:25:17.468 INFO: protonet asio version 0
2023-11-20 13:25:17.469 INFO: Using CRC-32C for message checksums.
2023-11-20 13:25:17.469 INFO: backend: asio
2023-11-20 13:25:17.470 INFO: gcomm thread scheduling priority set to other:0
2023-11-20 13:25:17.470 INFO: Fail to access the file (./gvwstate.dat) error (No such file or directory). It is possible if node is booting for first time or re-booting after a graceful shutdown
2023-11-20 13:25:17.470 INFO: Restoring primary-component from disk failed. Either node is booting for first time or re-booting after a graceful shutdown
2023-11-20 13:25:17.470 INFO: GMCast version 0
2023-11-20 13:25:17.471 INFO: (3ee7e084-96df, 'ssl://0.0.0.0:4567') listening at ssl://0.0.0.0:4567
2023-11-20 13:25:17.471 INFO: (3ee7e084-96df, 'ssl://0.0.0.0:4567') multicast: , ttl: 1
2023-11-20 13:25:17.472 INFO: EVS version 1
2023-11-20 13:25:17.472 INFO: gcomm: connecting to group 'cluster1-pxc', peer 'cluster1-pxc-2.cluster1-pxc:'
2023-11-20 13:25:20.473 INFO: announce period timed out (pc.announce_timeout)
2023-11-20 13:25:20.474 INFO: EVS version upgrade 0 -> 1
2023-11-20 13:25:20.474 INFO: PC protocol upgrade 0 -> 1
2023-11-20 13:25:20.474 WARN: no nodes coming from prim view, prim not possible
2023-11-20 13:25:20.474 INFO: Current view of cluster as seen by this node
view (view_id(NON_PRIM,3ee7e084-96df,1)
memb {
    3ee7e084-96df,0
    }
joined {
    }
left {
    }
partitioned {
    }
)
2023-11-20 13:25:20.974 WARN: last inactive check more than PT1.5S (3*evs.inactive_check_period) ago (PT3.50193S), skipping check
2023-11-20 13:25:50.492 INFO: Current view of cluster as seen by this node
view (view_id(NON_PRIM,3ee7e084-96df,1)
memb {
    3ee7e084-96df,0
    }
joined {
    }
left {
    }
partitioned {
    }
)
2023-11-20 13:25:50.492 INFO: PC protocol downgrade 1 -> 0
2023-11-20 13:25:50.492 INFO: Current view of cluster as seen by this node
view ((empty))
2023-11-20 13:25:50.492 ERROR: failed to open gcomm backend connection: 110: failed to reach primary view (pc.wait_prim_timeout): 110 (Connection timed out)
    at gcomm/src/pc.cpp:connect():161
2023-11-20 13:25:50.492 ERROR: gcs/src/gcs_core.cpp:gcs_core_open():219: Failed to open backend connection: -110 (Connection timed out)
2023-11-20 13:25:51.493 INFO: gcomm: terminating thread
2023-11-20 13:25:51.493 INFO: gcomm: joining thread
2023-11-20 13:25:51.493 ERROR: gcs/src/gcs.cpp:gcs_open():1811: Failed to open channel 'cluster1-pxc' at 'gcomm://cluster1-pxc-2.cluster1-pxc?gmcast.listen_addr=tcp://0.0.0.0:4567': -110 (Connection timed out)
2023-11-20 13:25:51.493 INFO: Shifting CLOSED -> DESTROYED (TO: 0)
2023-11-20 13:25:51.493 FATAL: Garbd exiting with error: Failed to open connection to group: 110 (Connection timed out)
    at garb/garb_gcs.cpp:Gcs():35

+ EXID_CODE=1
+ '[' -f /tmp/backup-is-completed ']'
+ log ERROR 'Backup was finished unsuccessfull'
2023-11-20 13:25:51 [ERROR] Backup was finished unsuccessfull
+ exit 1
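
Both attempts fail the same way: garbd joins the Galera group on port 4567, the same TLS listener that is rejecting peers with the expired-certificate alert, so the "failed to reach primary view" timeout looks like a symptom of the certificate problem rather than of the storage backend. To pull the full garbd output for an attempt (a sketch; the xb- job-name prefix is what I observe the operator using, so treat it as an assumption):

    kubectl get jobs -n s-devops
    kubectl logs -n s-devops job/xb-backup10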

Expected Result:

The scheduled daily backup to S3 completes successfully, as it did before November 8th.

Actual Result:

Every backup attempt fails, whether scheduled or on-demand and regardless of the storage used: garbd cannot join the cluster (connection timed out), and xtrabackup on the donor aborts with a broken pipe.

Additional Information:


Handshake failed: sslv3 alert certificate expired

Can you fix this issue within your cluster and try the backup again? You should be able to temporarily disable SSL altogether for a test, then re-enable it with corrected certificates.
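
If the certificates were generated by the operator itself (not supplied by you or by cert-manager), one possible way to refresh them is to remove the generated secrets and let the operator re-issue them. This is a sketch under that assumption; keep a copy of the old secrets and test outside production first:

    # Save the current secrets, then delete them so the operator recreates fresh ones
    kubectl get secret cluster1-ssl cluster1-ssl-internal -n s-devops -o yaml > old-ssl-secrets.yaml
    kubectl delete secret cluster1-ssl cluster1-ssl-internal -n s-devops

    # Restart the PXC pods one at a time so each mounts the regenerated certificates
    kubectl delete pod cluster1-pxc-2 -n s-devops   # wait until Ready, then repeat for -1 and -0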