Bootstrap with Existing UUID

Hello All,

Is there any way to bootstrap a node with an existing UUID? I have a three-node cluster, and the problem occurs whenever I restart the bootstrap node after all nodes are down, for example:

Node 1 - Bootstrap
Node 2
Node 3

I stop the nodes one by one: Node 3, then Node 2, then Node 1. When I start Node 1 as bootstrap, it resets the old UUID, which forces the other nodes to join the cluster via SST because the UUID has changed. I have a huge amount of data in the DB and cannot wait for it to be transmitted to the other nodes, since they are in different geographic regions: Node 1 in London, Node 2 in Frankfurt and Node 3 in Hong Kong.

Is there any solution to start the bootstrap node with the same UUID?

Any solutions please?
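
For context, the UUID a node last belonged to is persisted in grastate.dat in its data directory, so a quick way to see what state each node will try to rejoin with is (a minimal check, assuming the default PXC datadir; the uuid/seqno values below are only illustrative):

cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    e8e50be9-7f61-11e8-ba49-0af85422503a
seqno:   5784
safe_to_bootstrap: 1

If this file is missing or the UUID is zeroed out on the node being bootstrapped, the bootstrap generates a brand-new cluster UUID, and the other nodes then see a foreign state and fall back to SST.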

There is a blog post here about various scenarios; towards the end there may be situations similar to the one you are facing (with potential solutions).

[URL=“Galera replication – how to recover a PXC cluster”]https://www.percona.com/blog/2014/09...a-pxc-cluster/[/URL]

As always, test and secure working backups. Sorry, I have to say that, and I am sure you know it already.
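
For reference, the usual full-cluster restart flow on PXC 5.7 is to find the most advanced node and bootstrap that one; a rough sketch (assuming systemd-based PXC 5.7 packages, so the mysql@bootstrap.service unit exists):

# 1. On every node, compare the saved position:
grep -E 'uuid|seqno' /var/lib/mysql/grastate.dat

# 2. On the node with the highest seqno, bootstrap:
systemctl start mysql@bootstrap.service

# 3. Start the remaining nodes normally; if the gcache still covers the
#    missing transactions, they should rejoin via IST rather than SST:
systemctl start mysql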

If that article doesn’t help you out, could you provide more details about your environment, so that our engineers can understand the issue better and consider any known issues? In particular:
[LIST]
[*]Percona XtraDB Cluster version
[*]environment details
[*]error logs if any
[*]my.cnf files
[*]any other relevant information that might help them understand what you have or haven’t tried
[/LIST]
Thanks.

Hello Lorraine,

Thanks for the reply and the email.

The version of Percona XtraDB Cluster is 5.7.21.
Node 1 - Bootstrap node in London
Node 2 - Joiner node in Hong Kong
Node 3 - Joiner node in Frankfurt

Configuration Files: /etc/my.cnf

# The Percona XtraDB Cluster 5.7 configuration file.
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
#   Please make any edits and changes to the appropriate sectional files
#   included below.

!includedir /etc/my.cnf.d/
!includedir /etc/percona-xtradb-cluster.conf.d/

/etc/percona-xtradb-cluster.conf.d/mysqld.cnf

# Template my.cnf for PXC
# Edit to your requirements.

[client]
socket=/var/lib/mysql/mysql.sock

[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysql-error.log
slow-query-log-file=/var/log/mysql/mysql-slow.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
sql_mode=''
innodb_buffer_pool_size = 6G
innodb_log_file_size = 2G
#innodb_log_file_size = 50331648
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_read_io_threads = 16
innodb_write_io_threads = 16
innodb_io_capacity = 3000
innodb_io_capacity_max = 6000
#innodb_force_recovery=6
#innodb_force_recovery=3
#innodb_purge_threads=0

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

/etc/percona-xtradb-cluster.conf.d/wsrep.cnf

[mysqld]

# Path to Galera library

wsrep_provider=/usr/lib64/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes

#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://node1-ip,node2-ip,node3-ip

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

wsrep_provider_options="gcache.size=7G;gcache.page_size=3G;gcache.recover=yes"
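# Note: gcache.recover=yes (set above) lets a restarted node try to reuse its
# existing gcache, so it can serve IST to rejoining peers after a full restart.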

# MyISAM storage engine has only experimental support

default_storage_engine=InnoDB

# Slave thread to use

wsrep_slave_threads=16

wsrep_log_conflicts=ON

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera

innodb_autoinc_lock_mode=2

# Node IP address

wsrep_node_address=node1-ip

# Cluster name

wsrep_cluster_name=cluster-new

#If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name=node1cluster

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=DISABLED

# SST method

wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth="user:pass"

max_connections=999999
max_connect_errors=999999

This is the config on all nodes. Whenever I restart the cluster, the UUID changes to a different one; this resets the joiner nodes' state and leads to SST.

Normally the UUID will not change on restart.

Take a look at this example here:

node-2 boot for first time:
Group state: e8e50be9-7f61-11e8-ba49-0af85422503a:0
Local state: 00000000-0000-0000-0000-000000000000:-1

node-2 on shutdown:
2018-07-04T08:12:17.351541Z 3 [Note] WSREP: New cluster view: global state: e8e50be9-7f61-11e8-ba49-0af85422503a:1, view# -1: non-Primary, number of nodes: 0, my index: -1, protocol version 3

node-1 on shutdown:
2018-07-04T08:12:43.627751Z 2 [Note] WSREP: New cluster view: global state: e8e50be9-7f61-11e8-ba49-0af85422503a:5784, view# -1: non-Primary, number of nodes: 0, my index: -1, protocol version 3

node-2 on restart post node1 was restarted:
2018-07-04T08:13:43.005469Z 0 [Note] WSREP: Shifting OPEN → PRIMARY (TO: 5784)
2018-07-04T08:13:43.005681Z 2 [Note] WSREP: State transfer required:
Group state: e8e50be9-7f61-11e8-ba49-0af85422503a:5784
Local state: e8e50be9-7f61-11e8-ba49-0af85422503a:1

2018-07-04T08:13:43.110533Z 2 [Note] WSREP: IST receiver addr using tcp://192.168.1.8:5031
2018-07-04T08:13:43.110926Z 2 [Note] WSREP: Prepared IST receiver, listening at: tcp://192.168.1.8:5031

node-2 retains the same UUID, so there is no need for SST.
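
If it helps, a quick way to compare the UUID across nodes from a client is the standard wsrep status variable (shown here via the mysql CLI):

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_state_uuid';"
# All nodes of one cluster should report the same value, e.g.
# e8e50be9-7f61-11e8-ba49-0af85422503a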

  • When you are starting node-2, is the data directory being wiped and a fresh seed being used?
  • You can also check node-2's UUID by just running ./bin/mysqld … my.cnf … --wsrep_recover (see the sketch below); this will not start the server, but it will emit the cached wsrep position.
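
For example, a minimal invocation (paths are illustrative, and the options elided above are left out):

mysqld --defaults-file=/etc/my.cnf --wsrep-recover
# The server does not stay up: it writes the cached position to the error log
# and exits. Look for a line of the form:
#   WSREP: Recovered position: e8e50be9-7f61-11e8-ba49-0af85422503a:5784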

Hi krunalbauskar,

Thanks for sharing. In my case, the UUID was resetting every time the cluster went down (when not even a single node was running).