PXC 8.0 - may not be safe to bootstrap the cluster from this node; set safe_to_bootstrap to 1

Hi there,
I have a 3-node cluster (PXC 8.0): A, B, and C.
When bootstrapping node A, I hit the error below. I then tried bootstrapping nodes B and C the same way, but got the same error.

  1. How do I find the last node to leave the cluster?

  2. If I force a cluster bootstrap on any of the 3 nodes (A, B, or C) by editing the /var/lib/mysql/grastate.dat file, would there be data loss? If so, how do I recover the data?

  3. Is it safe to bootstrap any of the nodes?

2021-03-02T15:40:32.447442+05:30 0 [Note] [MY-000000] [WSREP] Starting replication
2021-03-02T15:40:32.447548+05:30 0 [Note] [MY-000000] [Galera] Connecting with bootstrap option: 1
2021-03-02T15:40:32.447643+05:30 0 [Note] [MY-000000] [Galera] Setting GCS initial position to 3788d9c7-141e-11eb-969b-da352344b906:536

2021-03-02T15:40:32.447797+05:30 0 [ERROR] [MY-000000] [Galera] It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1 .

2021-03-02T15:40:32.447814+05:30 0 [ERROR] [MY-000000] [WSREP] Provider/Node (gcomm://,, failed to establish connection with cluster (reason: 7)
2021-03-02T15:40:32.447831+05:30 0 [ERROR] [MY-010119] [Server] Aborting
2021-03-02T15:40:32.448007+05:30 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.20-11.2)  Percona XtraDB Cluster (GPL), Release rel11, Revision 9132e55, WSREP version 26.4.3.
2021-03-02T15:40:32.448309+05:30 0 [Note] [MY-000000] [Galera] dtor state: CLOSED
2021-03-02T15:40:32.448347+05:30 0 [Note] [MY-000000] [Galera] MemPool(TrxHandleSlave): hit ratio: 0, misses: 0, in use: 0, in pool: 0
2021-03-02T15:40:32.450297+05:30 0 [Note] [MY-000000] [Galera] apply mon: entered 0
2021-03-02T15:40:32.452439+05:30 0 [Note] [MY-000000] [Galera] apply mon: entered 0
2021-03-02T15:40:32.454775+05:30 0 [Note] [MY-000000] [Galera] apply mon: entered 0
2021-03-02T15:40:32.454804+05:30 0 [Note] [MY-000000] [Galera] cert index usage at exit 0
2021-03-02T15:40:32.454812+05:30 0 [Note] [MY-000000] [Galera] cert trx map usage at exit 0
2021-03-02T15:40:32.454819+05:30 0 [Note] [MY-000000] [Galera] deps set usage at exit 0
2021-03-02T15:40:32.454830+05:30 0 [Note] [MY-000000] [Galera] avg deps dist 0
2021-03-02T15:40:32.454839+05:30 0 [Note] [MY-000000] [Galera] avg cert interval 0
2021-03-02T15:40:32.454846+05:30 0 [Note] [MY-000000] [Galera] cert index size 0
2021-03-02T15:40:32.454909+05:30 0 [Note] [MY-000000] [Galera] Service thread queue flushed.
2021-03-02T15:40:32.454935+05:30 0 [Note] [MY-000000] [Galera] wsdb trx map usage 0 conn query map usage 0
2021-03-02T15:40:32.454947+05:30 0 [Note] [MY-000000] [Galera] MemPool(LocalTrxHandle): hit ratio: 0, misses: 0, in use: 0, in pool: 0
2021-03-02T15:40:32.455891+05:30 0 [Note] [MY-000000] [Galera] Flushing memory map to disk...
  1. Look at the /var/lib/mysql/grastate.dat file on each node. Whichever node was last to leave the cluster will have safe_to_bootstrap: 1. That node is the one you should bootstrap.
  2. Restoring a backup and re-bootstrapping the entire cluster would be the only way to recover.
  3. Check the grastate.dat file. Only 1 node should be marked safe.
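To illustrate the check in step 1, here is a minimal sketch. It builds a sample grastate.dat in a temp directory (the uuid is taken from your log; the seqno value is made up for illustration) and greps for the flag. On a real node you would run the grep against /var/lib/mysql/grastate.dat on each of A, B, and C:

```shell
# Illustrative sample only; on a real node, read /var/lib/mysql/grastate.dat instead.
tmpdir=$(mktemp -d)
cat > "$tmpdir/grastate.dat" <<'EOF'
# GALERA saved state
version: 2.1
uuid:    3788d9c7-141e-11eb-969b-da352344b906
seqno:   -1
safe_to_bootstrap: 1
EOF

# The node whose file shows "safe_to_bootstrap: 1" is the one to bootstrap.
grep safe_to_bootstrap "$tmpdir/grastate.dat"
```

Note that if all three nodes show safe_to_bootstrap: 0 (e.g. after a simultaneous power loss), none of them shut down cleanly, and you would have to pick the most advanced node yourself before editing the flag.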

Thanks for the awesome information.