SST error 32 (Broken pipe) when adding second node to bootstrapped cluster

I’ve successfully bootstrapped the first node and it is active. The firewall allows all traffic on all ports between the two servers. I’ve copied the ca.pem, server-cert.pem, and server-key.pem files from the /var/lib/mysql directory on node 1 into node 2 at /etc/mysql/certs, and have made the mysql user the owner of the copied certs on node 2.
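The copy/permission step can be sketched like this (paths as described above; demonstrated against a temp directory so it runs unprivileged — on the real node 2 you would copy the files from node 1 and then chown them to mysql:mysql):

```shell
# Sketch: stage the certs and lock down permissions as the SST script expects.
# On the real node 2, after copying from node 1:
#   chown -R mysql:mysql /etc/mysql/certs
CERT_DIR=$(mktemp -d)                      # stand-in for /etc/mysql/certs
touch "$CERT_DIR/ca.pem" "$CERT_DIR/server-cert.pem" "$CERT_DIR/server-key.pem"
chmod 0644 "$CERT_DIR/ca.pem" "$CERT_DIR/server-cert.pem"
chmod 0600 "$CERT_DIR/server-key.pem"      # private key must not be world-readable
stat -c '%a' "$CERT_DIR/server-key.pem"    # → 600
```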

Here’s the log output I receive:
2021-12-04T23:12:01.413230Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2021-12-04T23:12:01.413394Z 0 [Note] [MY-000000] [Galera] ####### Assign initial position for certification: c160989b-5552-11ec-9f38-dad52a103d84:1, protocol version: -1
2021-12-04T23:12:01.413509Z 0 [Note] [MY-000000] [WSREP] Starting replication
2021-12-04T23:12:01.413601Z 0 [Note] [MY-000000] [Galera] Connecting with bootstrap option: 0
2021-12-04T23:12:01.413697Z 0 [Note] [MY-000000] [Galera] Setting GCS initial position to c160989b-5552-11ec-9f38-dad52a103d84:1
2021-12-04T23:12:01.413848Z 0 [Note] [MY-000000] [Galera] protonet asio version 0
2021-12-04T23:12:01.414058Z 0 [Note] [MY-000000] [Galera] Using CRC-32C for message checksums.
2021-12-04T23:12:01.414175Z 0 [Note] [MY-000000] [Galera] initializing ssl context
2021-12-04T23:12:01.414473Z 0 [Note] [MY-000000] [Galera] backend: asio
2021-12-04T23:12:01.414648Z 0 [Note] [MY-000000] [Galera] gcomm thread scheduling priority set to other:0
2021-12-04T23:12:01.414823Z 0 [Warning] [MY-000000] [Galera] Fail to access the file (/var/lib/mysql//gvwstate.dat) error (No such file or directory). It is possible if node is booting for first time or re-booting after a graceful shutdown
2021-12-04T23:12:01.414946Z 0 [Note] [MY-000000] [Galera] Restoring primary-component from disk failed. Either node is booting for first time or re-booting after a graceful shutdown
2021-12-04T23:12:01.415219Z 0 [Note] [MY-000000] [Galera] GMCast version 0
2021-12-04T23:12:01.415492Z 0 [Note] [MY-000000] [Galera] (96528c37-9f8d, 'ssl://') listening at ssl://
2021-12-04T23:12:01.415615Z 0 [Note] [MY-000000] [Galera] (96528c37-9f8d, 'ssl://') multicast: , ttl: 1
2021-12-04T23:12:01.416008Z 0 [Note] [MY-000000] [Galera] EVS version 1
2021-12-04T23:12:01.416208Z 0 [Note] [MY-000000] [Galera] gcomm: connecting to group 'pxc-cluster', peer ',,'
2021-12-04T23:12:01.425048Z 0 [Note] [MY-000000] [Galera] SSL handshake successful, remote endpoint ssl:// local endpoint ssl:// cipher: TLS_AES_256_GCM_SHA384 compression: none
2021-12-04T23:12:01.425629Z 0 [Note] [MY-000000] [Galera] SSL handshake successful, remote endpoint ssl:// local endpoint ssl:// cipher: TLS_AES_256_GCM_SHA384 compression: none
2021-12-04T23:12:01.427306Z 0 [Note] [MY-000000] [Galera] SSL handshake successful, remote endpoint ssl:// local endpoint ssl:// cipher: TLS_AES_256_GCM_SHA384 compression: none
2021-12-04T23:12:01.427491Z 0 [Note] [MY-000000] [Galera] (96528c37-9f8d, 'ssl://') Found matching local endpoint for a connection, blacklisting address ssl://
2021-12-04T23:12:01.428895Z 0 [Note] [MY-000000] [Galera] (96528c37-9f8d, 'ssl://') connection established to 7929fac9-9d41 ssl://
2021-12-04T23:12:01.429143Z 0 [Note] [MY-000000] [Galera] (96528c37-9f8d, 'ssl://') turning message relay requesting on, nonlive peers:
2021-12-04T23:12:01.920869Z 0 [Note] [MY-000000] [Galera] EVS version upgrade 0 → 1
2021-12-04T23:12:01.921122Z 0 [Note] [MY-000000] [Galera] declaring 7929fac9-9d41 at ssl:// stable
2021-12-04T23:12:01.921284Z 0 [Note] [MY-000000] [Galera] PC protocol upgrade 0 → 1
2021-12-04T23:12:01.921985Z 0 [Note] [MY-000000] [Galera] Node 7929fac9-9d41 state primary
2021-12-04T23:12:01.922945Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(PRIM,7929fac9-9d41,2)
memb {
joined {
left {
partitioned {
2021-12-04T23:12:01.923114Z 0 [Note] [MY-000000] [Galera] Save the discovered primary-component to disk
2021-12-04T23:12:01.925445Z 0 [Note] [MY-000000] [Galera] discarding pending addr without UUID: ssl://
2021-12-04T23:12:02.419012Z 0 [Note] [MY-000000] [Galera] gcomm: connected
2021-12-04T23:12:02.419326Z 0 [Note] [MY-000000] [Galera] Changing maximum packet size to 64500, resulting msg size: 32636
2021-12-04T23:12:02.419552Z 0 [Note] [MY-000000] [Galera] Shifting CLOSED → OPEN (TO: 0)
2021-12-04T23:12:02.419755Z 0 [Note] [MY-000000] [Galera] Opened channel 'pxc-cluster'
2021-12-04T23:12:02.420248Z 1 [Note] [MY-000000] [WSREP] Starting applier thread 1
2021-12-04T23:12:02.420543Z 2 [Note] [MY-000000] [WSREP] Starting rollbacker thread 2
2021-12-04T23:12:02.420816Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
2021-12-04T23:12:02.421027Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: Waiting for state UUID.
2021-12-04T23:12:02.421367Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: sent state msg: 969b9aeb-5557-11ec-87fd-ff907c91e781
2021-12-04T23:12:02.421617Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: got state msg: 969b9aeb-5557-11ec-87fd-ff907c91e781 from 0 (pxc-1)
2021-12-04T23:12:02.421989Z 0 [Note] [MY-000000] [Galera] STATE EXCHANGE: got state msg: 969b9aeb-5557-11ec-87fd-ff907c91e781 from 1 (pxc-2)
2021-12-04T23:12:02.422205Z 0 [Note] [MY-000000] [Galera] Quorum results:
version = 6,
component = PRIMARY,
conf_id = 1,
members = 1/2 (primary/total),
act_id = 2,
last_appl. = 1,
protocols = 2/10/4 (gcs/repl/appl),
vote policy= 0,
group UUID = 0aad7cc0-5550-11ec-8127-e663ae9bb018
2021-12-04T23:12:02.422459Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [141, 141]
2021-12-04T23:12:02.422662Z 0 [Note] [MY-000000] [Galera] Shifting OPEN → PRIMARY (TO: 3)
2021-12-04T23:12:02.422916Z 1 [Note] [MY-000000] [Galera] ####### processing CC 3, local, ordered
2021-12-04T23:12:02.423141Z 1 [Note] [MY-000000] [Galera] Maybe drain monitors from 1 upto current CC event 3 upto:1
2021-12-04T23:12:02.423355Z 1 [Note] [MY-000000] [Galera] Drain monitors from 1 up to 1
2021-12-04T23:12:02.423583Z 1 [Note] [MY-000000] [Galera] Process first view: 0aad7cc0-5550-11ec-8127-e663ae9bb018 my uuid: 96528c37-5557-11ec-9f8d-16f32bd0f4eb
2021-12-04T23:12:02.423827Z 1 [Note] [MY-000000] [Galera] Server pxc-2 connected to cluster at position 0aad7cc0-5550-11ec-8127-e663ae9bb018:3 with ID 96528c37-5557-11ec-9f8d-16f32bd0f4eb
2021-12-04T23:12:02.424055Z 1 [Note] [MY-000000] [WSREP] Server status change disconnected → connected
2021-12-04T23:12:02.424290Z 1 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.
2021-12-04T23:12:02.424557Z 1 [Note] [MY-000000] [Galera] ####### My UUID: 96528c37-5557-11ec-9f8d-16f32bd0f4eb
2021-12-04T23:12:02.424791Z 1 [Note] [MY-000000] [Galera] Cert index reset to 00000000-0000-0000-0000-000000000000:-1 (proto: 10), state transfer needed: yes
2021-12-04T23:12:02.425049Z 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2021-12-04T23:12:02.425304Z 1 [Note] [MY-000000] [Galera] ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:-1, protocol version: -1
2021-12-04T23:12:02.425543Z 1 [Note] [MY-000000] [Galera] State transfer required:
Group state: 0aad7cc0-5550-11ec-8127-e663ae9bb018:3
Local state: c160989b-5552-11ec-9f38-dad52a103d84:1
2021-12-04T23:12:02.425789Z 1 [Note] [MY-000000] [WSREP] Server status change connected → joiner
2021-12-04T23:12:02.426043Z 1 [Note] [MY-000000] [WSREP] wsrep_notify_cmd is not defined, skipping notification.
2021-12-04T23:12:02.426415Z 0 [Note] [MY-000000] [WSREP] Initiating SST/IST transfer on JOINER side (wsrep_sst_xtrabackup-v2 --role 'joiner' --address '' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib/mysql/plugin/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '16923' --mysqld-version '8.0.25-15.1' '' )
2021-12-04T23:12:03.510608Z 1 [Note] [MY-000000] [WSREP] Prepared SST request: xtrabackup-v2|
2021-12-04T23:12:03.510958Z 1 [Note] [MY-000000] [Galera] Check if state gap can be serviced using IST
2021-12-04T23:12:03.511229Z 1 [Note] [MY-000000] [Galera] Local UUID: c160989b-5552-11ec-9f38-dad52a103d84 != Group UUID: 0aad7cc0-5550-11ec-8127-e663ae9bb018
2021-12-04T23:12:03.511504Z 1 [Note] [MY-000000] [Galera] ####### IST uuid:c160989b-5552-11ec-9f38-dad52a103d84 f: 0, l: 3, STRv: 3
2021-12-04T23:12:03.511885Z 1 [Note] [MY-000000] [Galera] IST receiver addr using ssl://
2021-12-04T23:12:03.512219Z 1 [Note] [MY-000000] [Galera] IST receiver using ssl
2021-12-04T23:12:03.512758Z 1 [Note] [MY-000000] [Galera] Prepared IST receiver for 0-3, listening at: ssl://
2021-12-04T23:12:03.516120Z 0 [Note] [MY-000000] [Galera] Member 1.0 (pxc-2) requested state transfer from 'any'. Selected 0.0 (pxc-1)(SYNCED) as donor.
2021-12-04T23:12:03.516424Z 0 [Note] [MY-000000] [Galera] Shifting PRIMARY → JOINER (TO: 3)
2021-12-04T23:12:03.516751Z 1 [Note] [MY-000000] [Galera] Requesting state transfer: success, donor: 0
2021-12-04T23:12:03.517045Z 1 [Note] [MY-000000] [Galera] Resetting GCache seqno map due to different histories.
2021-12-04T23:12:03.517347Z 1 [Note] [MY-000000] [Galera] GCache history reset: c160989b-5552-11ec-9f38-dad52a103d84:1 → 0aad7cc0-5550-11ec-8127-e663ae9bb018:3
2021-12-04T23:12:03.518386Z 1 [Note] [MY-000000] [Galera] GCache DEBUG: RingBuffer::seqno_reset(): discarded 18446744073709551608 bytes
2021-12-04T23:12:03.518692Z 1 [Note] [MY-000000] [Galera] GCache DEBUG: RingBuffer::seqno_reset(): found 1/2 locked buffers
2021-12-04T23:12:04.919772Z 0 [Note] [MY-000000] [Galera] (96528c37-9f8d, 'ssl://') turning message relay requesting off
2021-12-04T23:13:43.378716Z 0 [Note] [MY-000000] [WSREP-SST] Trying to terminate (17382) socat -u openssl-listen:4444,reuseaddr,cert=/etc/mysql/certs/server-cert.pem,key=/etc/mysql/certs/server-key.pem,cafile=/etc/mysql/certs/ca.pem,verify=1,retry=30 stdio | /usr/bin/pxc_extra/pxb-8.0/bin/xbstream -x with SIGTERM
2021-12-04T23:13:44.397460Z 0 [ERROR] [MY-000000] [WSREP-SST] ******************* FATAL ERROR **********************
2021-12-04T23:13:44.398198Z 0 [ERROR] [MY-000000] [WSREP-SST] Possible timeout in receving first data from donor in gtid/keyring stage
2021-12-04T23:13:44.398216Z 0 [ERROR] [MY-000000] [WSREP-SST] Line 1286
2021-12-04T23:13:44.398230Z 0 [ERROR] [MY-000000] [WSREP-SST] ******************************************************
2021-12-04T23:13:44.398244Z 0 [ERROR] [MY-000000] [WSREP-SST] Cleanup after exit with status:32
2021-12-04T23:13:44.405295Z 0 [ERROR] [MY-000000] [WSREP] Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '' --datadir '/var/lib/mysql/' --basedir '/usr/' --plugindir '/usr/lib/mysql/plugin/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '16923' --mysqld-version '8.0.25-15.1' '' : 32 (Broken pipe)
2021-12-04T23:13:44.405397Z 0 [ERROR] [MY-000000] [WSREP] Failed to read uuid:seqno from joiner script.
2021-12-04T23:13:44.405423Z 0 [ERROR] [MY-000000] [WSREP] SST script aborted with error 32 (Broken pipe)
2021-12-04T23:13:44.405508Z 3 [Note] [MY-000000] [Galera] Processing SST received
2021-12-04T23:13:44.405546Z 3 [Note] [MY-000000] [Galera] SST request was cancelled
2021-12-04T23:13:44.405575Z 3 [ERROR] [MY-000000] [Galera] State transfer request failed unrecoverably: 32 (Broken pipe). Most likely it is due to inability to communicate with the cluster primary component. Restart required.
2021-12-04T23:13:44.405593Z 3 [Note] [MY-000000] [Galera] ReplicatorSMM::abort()
2021-12-04T23:13:44.405613Z 3 [Note] [MY-000000] [Galera] Closing send monitor…
2021-12-04T23:13:44.405633Z 3 [Note] [MY-000000] [Galera] Closed send monitor.
2021-12-04T23:13:44.405651Z 3 [Note] [MY-000000] [Galera] gcomm: terminating thread
2021-12-04T23:13:44.405673Z 3 [Note] [MY-000000] [Galera] gcomm: joining thread
2021-12-04T23:13:44.405747Z 3 [Note] [MY-000000] [Galera] gcomm: closing backend
2021-12-04T23:13:45.407397Z 3 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view (view_id(NON_PRIM,7929fac9-9d41,2)
memb {
joined {
left {
partitioned {
2021-12-04T23:13:45.407507Z 3 [Note] [MY-000000] [Galera] PC protocol downgrade 1 → 0
2021-12-04T23:13:45.407532Z 3 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node
view ((empty))
2021-12-04T23:13:45.407881Z 3 [Note] [MY-000000] [Galera] gcomm: closed
2021-12-04T23:13:45.407925Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2021-12-04T23:13:45.407990Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [100, 100]
2021-12-04T23:13:45.408011Z 0 [Note] [MY-000000] [Galera] Received NON-PRIMARY.
2021-12-04T23:13:45.408028Z 0 [Note] [MY-000000] [Galera] Shifting JOINER → OPEN (TO: 3)
2021-12-04T23:13:45.408046Z 0 [Note] [MY-000000] [Galera] New SELF-LEAVE.
2021-12-04T23:13:45.408101Z 0 [Note] [MY-000000] [Galera] Flow-control interval: [0, 0]
2021-12-04T23:13:45.408118Z 0 [Note] [MY-000000] [Galera] Received SELF-LEAVE. Closing connection.
2021-12-04T23:13:45.408133Z 0 [Note] [MY-000000] [Galera] Shifting OPEN → CLOSED (TO: 3)
2021-12-04T23:13:45.408148Z 0 [Note] [MY-000000] [Galera] RECV thread exiting 0: Success
2021-12-04T23:13:45.408256Z 3 [Note] [MY-000000] [Galera] recv_thread() joined.
2021-12-04T23:13:45.408275Z 3 [Note] [MY-000000] [Galera] Closing replication queue.
2021-12-04T23:13:45.408289Z 3 [Note] [MY-000000] [Galera] Closing slave action queue.
2021-12-04T23:13:45.408310Z 3 [Note] [MY-000000] [Galera] /usr/sbin/mysqld: Terminated.
2021-12-04T23:13:45.408325Z 3 [Note] [MY-000000] [WSREP] Initiating SST cancellation
2021-12-04T23:13:45.408339Z 3 [Note] [MY-000000] [WSREP] Terminating SST process

Here’s my config for node #2:

I’ve been following the instructions from one of your technical writeups here:

I know I’m close (hopefully). Thank you in advance!


Here is my config for node #1

Website wouldn’t let me post more than 1 image since I am a “new user”. God forbid. LoL. Thanks again!!


Verify networking by doing this:

joiner: socat - TCP-LISTEN:4444
donor: echo "hello" | socat - TCP:ip.addr.of.joiner:4444

Then repeat, but invert the node commands.
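Once plain TCP works both ways, it may also be worth repeating the test over SSL with the same kind of options the SST script uses, since that is the leg that actually fails. A sketch using the cert paths from this thread (adjust per node; socat option spellings can vary slightly by version):

```
joiner: socat - openssl-listen:4444,reuseaddr,cert=/etc/mysql/certs/server-cert.pem,key=/etc/mysql/certs/server-key.pem,cafile=/etc/mysql/certs/ca.pem,verify=1
donor:  echo "hello" | socat - openssl-connect:ip.addr.of.joiner:4444,cert=/var/lib/mysql/server-cert.pem,key=/var/lib/mysql/server-key.pem,cafile=/var/lib/mysql/ca.pem,verify=1
```

If "hello" does not arrive over SSL while plain TCP works, the certificate setup itself is the problem rather than PXC.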


I am able to echo across both servers.


I also just double checked to make sure the first node I bootstrapped with is still running. Here is what I get when I punch in "show status like 'wsrep%';"


In your my.cnf, do you have wsrep_sst_auth? And did you create the user on the donor?

Here is what I have in my.cnf on node #1:


I have not added wsrep_sst_auth to this file. The instructions say it is created automatically.

I have not created a user on the other nodes as the instructions say this is done automatically when provisioned correctly:

Obviously I missed something. I understood the instructions to mean that once you bootstrap the first node, all you need to do is copy the certs over to the other nodes, update the configuration files on the other nodes to point at the new cert location, update the node name and node IP address, and, as long as the network isn’t blocking them, they should come online:

Thanks again. I’ve tried this multiple times and always seem to get stonewalled at the SSL part (I can get it to work with encryption off).


You have posted this question in the PXC 5.* forum, but you are reading/following the docs for PXC 8.*. I assumed that since you posted in the 5.* forum, you are using PXC 5.7.*. The docs are correct regarding wsrep_sst_auth IF you are using PXC 8. What version are you using?
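For reference, on PXC 5.7 the SST user is manual. A sketch with hypothetical credentials (not needed on 8.0, which manages the SST user internally):

```ini
# my.cnf on each node -- PXC 5.7 only; credentials are hypothetical
[mysqld]
wsrep_sst_auth = sstuser:s3cret
```

The matching user must then be created on the donor with at least RELOAD, LOCK TABLES, PROCESS, and REPLICATION CLIENT privileges.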


Oh, sorry for the misunderstanding, my mistake.

This is version 8. I just downloaded and installed the package a few days ago from this URL:


Ok. That’s good. If you enable encryption and it’s not working, then you have SSL certificate errors. Check permissions, paths, and my.cnf config of the certs.
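One concrete check worth adding here: confirm the copied cert and key actually belong together. A self-contained sketch (it generates a throwaway self-signed pair so it can run anywhere; on the real nodes, point the paths at /etc/mysql/certs/* instead):

```shell
# Demo: a cert and key belong together iff they carry the same public key.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout "$DIR/server-key.pem" -out "$DIR/server-cert.pem" 2>/dev/null
CERT_PUB=$(openssl x509 -noout -pubkey -in "$DIR/server-cert.pem" | sha256sum)
KEY_PUB=$(openssl pkey -pubout -in "$DIR/server-key.pem" | sha256sum)
[ "$CERT_PUB" = "$KEY_PUB" ] && echo "cert/key match"
# For the CA relationship, additionally run:
#   openssl verify -CAfile ca.pem server-cert.pem
```

A cert/key mismatch on the joiner can surface as exactly this kind of failed socat TLS listener and broken-pipe SST abort.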

I moved this topic to the correct subforum.


I don’t store any of the configuration in the my.cnf:

All the configuration work has been done here

You can see the certificate paths are pointed to

Here are the permissions for the certificates

I bootstrapped the first node and then, after it successfully started, I copied the certificates that were in /var/lib/mysql over to this node. I updated the config file to point at these certs, as depicted in the screenshots.

From what I can tell the permissions are set right and it’s using the correct certificates. Does everything look right?


The group should be mysql as well. Did you verify via show global variables like 'wsrep%' that the certs were loaded correctly on node 1?
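On PXC 8, cluster and SST traffic encryption is driven by pxc-encrypt-cluster-traffic together with the [mysqld] ssl-* settings, so one way to rule out path confusion is to point them at the copied certs explicitly. A hypothetical fragment using the paths from this thread:

```ini
# my.cnf fragment for node 2 (paths assumed from the thread)
[mysqld]
pxc-encrypt-cluster-traffic = ON
ssl-ca   = /etc/mysql/certs/ca.pem
ssl-cert = /etc/mysql/certs/server-cert.pem
ssl-key  = /etc/mysql/certs/server-key.pem
```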


I changed the group to mysql on node 2.

show global variables like 'wsrep%';

| wsrep_OSU_method | TOI |
| wsrep_RSU_commit_timeout | 5000 |
| wsrep_SR_store | table |
| wsrep_auto_increment_control | ON |
| wsrep_causal_reads | OFF |
| wsrep_certification_rules | strict |
| wsrep_certify_nonPK | ON |
| wsrep_cluster_address | gcomm://,, |
| wsrep_cluster_name | pxc-cluster |
| wsrep_data_home_dir | /var/lib/mysql/ |
| wsrep_dbug_option | |
| wsrep_debug | NONE |
| wsrep_desync | OFF |
| wsrep_dirty_reads | OFF |
| wsrep_ignore_apply_errors | 0 |
| wsrep_load_data_splitting | OFF |
| wsrep_log_conflicts | ON |
| wsrep_max_ws_rows | 0 |
| wsrep_max_ws_size | 2147483647 |
| wsrep_min_log_verbosity | 3 |
| wsrep_node_address | |
| wsrep_node_incoming_address | AUTO |
| wsrep_node_name | pxc-1 |
| wsrep_notify_cmd | |
| wsrep_provider | /usr/lib/galera4/ |
| wsrep_provider_options | base_dir = /var/lib/mysql/; base_host =; base_port = 4567; cert.log_conflicts = no; cert.optimistic_pa = no; debug = no; evs.auto_evict = 0; evs.causal_keepalive_period = PT1S; evs.debug_log_mask = 0x1; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.info_log_mask = 0; evs.install_timeout = PT7.5S; evs.join_retrans_period = PT1S; evs.keepalive_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 10; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.use_aggregate = true; evs.user_send_window = 4; evs.version = 1; evs.view_forget_timeout = P1D; gcache.dir = /var/lib/mysql/; gcache.freeze_purge_at_seqno = -1; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; = galera.cache; gcache.page_size = 128M; gcache.recover = yes; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 100; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.listen_addr = ssl://; gmcast.mcast_addr = ; gmcast.mcast_ttl = 1; gmcast.peer_timeout = PT3S; gmcast.segment = 0; gmcast.time_wait = PT5S; gmcast.version = 0; ist.recv_addr =; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.linger = PT20S; pc.npvo = false; pc.recovery = true; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = PT30S; pc.weight = 1; protonet.backend = asio; protonet.version = 0; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.max_ws_size = 2147483647; repl.proto_max = 10; socket.checksum = 2; socket.recv_buf_size = auto; socket.send_buf_size = auto; socket.ssl = YES; socket.ssl_ca = ca.pem; socket.ssl_cert = server-cert.pem; socket.ssl_cipher = ; socket.ssl_compression = YES; socket.ssl_key = server-key.pem; |


| wsrep_recover | OFF |
| wsrep_reject_queries | NONE |
| wsrep_replicate_myisam | OFF |
| wsrep_restart_slave | OFF |
| wsrep_retry_autocommit | 1 |
| wsrep_slave_FK_checks | ON |
| wsrep_slave_UK_checks | OFF |
| wsrep_slave_threads | 8 |
| wsrep_sst_allowed_methods | xtrabackup-v2 |
| wsrep_sst_donor | |
| wsrep_sst_donor_rejects_queries | OFF |
| wsrep_sst_method | xtrabackup-v2 |
| wsrep_sst_receive_address | AUTO |
| wsrep_start_position | 0aad7cc0-5550-11ec-8127-e663ae9bb018:1 |
| wsrep_sync_wait | 0 |
| wsrep_trx_fragment_size | 0 |
| wsrep_trx_fragment_unit | bytes |

I assume the socket.ssl = YES means node 1 has loaded the certs correctly?

Also, it might just be semantics, but you say ‘loaded the certs correctly’; node 1 generated the certs into the /var/lib/mysql directory and is still using that location. I just want to make sure you don’t think node 1 is running them from the /etc/mysql/certs directory yet (I’m trying to get node 2 connected before I redirect it there).


What do the logs for node1 say while you are trying to start node2?


Here are the logs for node1:

Hopefully this gives a better idea of what is happening:

Thanks again.


Just to follow up here.

It seems like the error node 1 is throwing in the logs is this:

2021-12-07T23:18:26.341128Z 0 [Warning] [MY-000000] [Galera] Member 0.0 (pxc-2) requested state transfer from 'any', but it is impossible to select State Transfer donor: Resource temporarily unavailable

Just to reiterate: I’ve only bootstrapped the first node, left it running, copied the certs that were in /var/lib/mysql over to the second node, and used the same config on the second node except for changing the node name, IP address, and the pathing here:


but it is impossible to select State Transfer donor: Resource temporarily unavailable

Not sure what is going on here. You’ve got something strange at work with PXC and your OS/network. I’ve only seen this error one other time, and it was because they had specified wsrep_sst_donor via IP instead of hostname.
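If a donor does end up being pinned, a sketch of the hostname form (node name taken from the thread; purely illustrative):

```ini
# Pin the donor by its wsrep node name, not its IP
[mysqld]
wsrep_sst_donor = pxc-1
```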


Could it be that you ran into the following problem? [PXC-3679] SST fails after the update of socat to '' - Percona JIRA
