One node crashes/hangs, the other nodes become non-primary

Hello,

I have a running cluster with 3 nodes, but if one node hangs or crashes, the other nodes become non-primary and the whole cluster fails. This does not happen when a node is shut down gracefully.

This is the chronology of the crash:
- I lose node A because of a hardware/VM hang.
- The other nodes, B and C, become non-primary, and the cluster fails and cannot be used.
- The cluster becomes normal again after the hung node A restarts and the MySQL service on that node joins the cluster.
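For context, in a 3-node cluster the two surviving nodes should normally keep quorum (2 of 3), so B and C both dropping to non-primary usually means they also lost sight of each other, or the hung node stalled the group until the EVS timeouts expired. A quick way to check each node's view of the cluster while the problem is occurring (these are the standard Galera status variables; run them on each surviving node):

```sql
-- 'Primary' means this node still has quorum; 'non-Primary' means it lost it.
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
-- How many nodes this node currently sees in its component.
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
-- Whether this node is ready to accept queries.
SHOW GLOBAL STATUS LIKE 'wsrep_ready';
```

Comparing the output of nodes B and C during the outage would show whether they partitioned from each other or only from node A.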

Node A: cluster-node1 = 10.30.1.43
Node B: cluster-node2 = 10.30.1.44
Node C: cluster-node3 = 10.30.1.30

Node B: (log not included in the paste)

Node C: (log not included in the paste)

After the hang, the log on node A:

2020-06-24T09:21:27.767684+07:00 0 [Note] /usr/sbin/mysqld (mysqld 5.7.23) starting as process 1405 …
2020-06-24T09:21:27.774060+07:00 0 [Note] WSREP: Read nil XID from storage engines, skipping position init
2020-06-24T09:21:27.774106+07:00 0 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera/libgalera_smm.so'
2020-06-24T09:21:27.918050+07:00 0 [Note] WSREP: wsrep_load(): Galera 3.24(rf216443) by Codership Oy <info@codership.com> loaded successfully.
2020-06-24T09:21:27.918137+07:00 0 [Note] WSREP: CRC-32C: using hardware acceleration.
2020-06-24T09:21:27.932343+07:00 0 [Note] WSREP: Found saved state: 28b84a9c-b8b0-11e7-bcc1-ab2b6eeffa33:-1, safe_to_bootstrap: 0
2020-06-24T09:21:29.715429+07:00 0 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 10.30.1.43; base_port = 4567; cert.log_conflicts = YES; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ig
2020-06-24T09:21:29.789860+07:00 0 [Note] WSREP: GCache history reset: 28b84a9c-b8b0-11e7-bcc1-ab2b6eeffa33:0 -> 28b84a9c-b8b0-11e7-bcc1-ab2b6eeffa33:9582329
2020-06-24T09:21:29.813620+07:00 0 [Note] WSREP: Assign initial position for certification: 9582329, protocol version: -1
2020-06-24T09:21:29.813663+07:00 0 [Note] WSREP: wsrep_sst_grab()
2020-06-24T09:21:29.813674+07:00 0 [Note] WSREP: Start replication
2020-06-24T09:21:29.813693+07:00 0 [Note] WSREP: Setting initial position to 28b84a9c-b8b0-11e7-bcc1-ab2b6eeffa33:9582329
2020-06-24T09:21:29.813845+07:00 0 [Note] WSREP: protonet asio version 0
2020-06-24T09:21:29.814070+07:00 0 [Note] WSREP: Using CRC-32C for message checksums.
2020-06-24T09:21:29.816491+07:00 0 [Note] WSREP: backend: asio
2020-06-24T09:21:29.816634+07:00 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
2020-06-24T09:21:29.838599+07:00 0 [Note] WSREP: restore pc from disk successfully
2020-06-24T09:21:29.839001+07:00 0 [Note] WSREP: GMCast version 0
2020-06-24T09:21:29.839471+07:00 0 [Note] WSREP: (1ceb3dad, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2020-06-24T09:21:29.839496+07:00 0 [Note] WSREP: (1ceb3dad, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2020-06-24T09:21:29.840169+07:00 0 [Note] WSREP: EVS version 0
2020-06-24T09:21:29.840354+07:00 0 [Note] WSREP: gcomm: connecting to group 'cluster-mysql', peer '10.30.1.43:,10.30.1.44:,10.30.1.30:'
2020-06-24T09:21:29.843474+07:00 0 [Note] WSREP: (1ceb3dad, 'tcp://0.0.0.0:4567') connection established to 1ceb3dad tcp://10.30.1.43:4567
2020-06-24T09:21:29.843559+07:00 0 [Warning] WSREP: (1ceb3dad, 'tcp://0.0.0.0:4567') address 'tcp://10.30.1.43:4567' points to own listening address, blacklisting
2020-06-24T09:21:29.845197+07:00 0 [Note] WSREP: (1ceb3dad, 'tcp://0.0.0.0:4567') connection established to 2f81a4e3 tcp://10.30.1.44:4567
2020-06-24T09:21:29.845290+07:00 0 [Note] WSREP: (1ceb3dad, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers:
2020-06-24T09:21:29.845369+07:00 0 [Note] WSREP: (1ceb3dad, 'tcp://0.0.0.0:4567') connection established to b48606bd tcp://10.30.1.30:4567
2020-06-24T09:21:30.345175+07:00 0 [Note] WSREP: declaring 2f81a4e3 at tcp://10.30.1.44:4567 stable
2020-06-24T09:21:30.345216+07:00 0 [Note] WSREP: declaring b48606bd at tcp://10.30.1.30:4567 stable
2020-06-24T09:21:30.346162+07:00 0 [Note] WSREP: re-bootstrapping prim from partitioned components
2020-06-24T09:21:30.347135+07:00 0 [Note] WSREP: view(view_id(PRIM,1ceb3dad,623) memb {
1ceb3dad,0
2f81a4e3,0
b48606bd,0
} joined {
} left {
} partitioned {
})
2020-06-24T09:21:30.347175+07:00 0 [Note] WSREP: save pc into disk
2020-06-24T09:21:30.390407+07:00 0 [Note] WSREP: clear restored view
2020-06-24T09:21:30.842526+07:00 0 [Note] WSREP: gcomm: connected
2020-06-24T09:21:30.842643+07:00 0 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2020-06-24T09:21:30.842809+07:00 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2020-06-24T09:21:30.842829+07:00 0 [Note] WSREP: Opened channel 'cluster-mysql'
2020-06-24T09:21:30.842935+07:00 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 3
2020-06-24T09:21:30.843040+07:00 0 [Note] WSREP: Waiting for SST to complete.
2020-06-24T09:21:30.843923+07:00 0 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 6a81d310-b5c1-11ea-90ec-7eeb19ee9aa6
2020-06-24T09:21:30.845347+07:00 0 [Note] WSREP: STATE EXCHANGE: sent state msg: 6a81d310-b5c1-11ea-90ec-7eeb19ee9aa6
2020-06-24T09:21:30.846068+07:00 0 [Note] WSREP: STATE EXCHANGE: got state msg: 6a81d310-b5c1-11ea-90ec-7eeb19ee9aa6 from 0 (cluster-node1)
2020-06-24T09:21:30.846099+07:00 0 [Note] WSREP: STATE EXCHANGE: got state msg: 6a81d310-b5c1-11ea-90ec-7eeb19ee9aa6 from 1
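Two lines in this log are worth noting: "Found saved state ... safe_to_bootstrap: 0" and "restore pc from disk successfully". The saved state comes from the grastate.dat file in the data directory, and the "restore pc" line shows that Galera's pc.recovery feature re-formed the primary component automatically once node A came back. For reference, the file that log line refers to looks like this (the UUID below is taken from your log; the other values are illustrative):

```
# GALERA saved state
version: 2.1
uuid:    28b84a9c-b8b0-11e7-bcc1-ab2b6eeffa33
seqno:   -1
safe_to_bootstrap: 0
```

seqno -1 with safe_to_bootstrap 0 is what you would expect on a node that was not shut down cleanly.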

Dear @hendramaulana,
If your question is still relevant, please post the config files for each node.
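In particular, the wsrep section of my.cnf is the part that matters here. A typical fragment for this cluster might look like the following (the addresses and cluster name are taken from the post above; the remaining values are illustrative defaults, not your actual settings):

```ini
[mysqld]
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=cluster-mysql
wsrep_cluster_address=gcomm://10.30.1.43,10.30.1.44,10.30.1.30
wsrep_node_name=cluster-node1      # node-specific
wsrep_node_address=10.30.1.43      # node-specific
# pc.recovery (enabled by default) produced the
# "restore pc from disk" line in the node A log.
wsrep_provider_options="pc.recovery=true"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
```

Seeing the actual values on all three nodes, especially wsrep_cluster_address and any wsrep_provider_options overrides (evs.* timeouts, pc.weight), would help explain why the surviving pair lost quorum.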