
Bootstrap with Existing UUID

kvigneshs (Contributor)
Hello All,

Is there any way to bootstrap a node with an existing UUID? I have a three-node cluster; whenever I restart the bootstrap with all nodes down, for example:

Node 1 - Bootstrap
Node 2
Node 3

I stop the nodes one by one: Node 3, then Node 2, then Node 1. When I start Node 1 as the bootstrap node, it resets the old UUID, which forces the other nodes to rejoin the cluster via SST because the UUID has changed. I have a huge amount of data in the DB and cannot wait for it to be transmitted to the other nodes, since they are in different geographic regions: Node 1 in London, Node 2 in Frankfurt, and Node 3 in Hong Kong.

Is there any solution to start the bootstrap with the same UUID?


  • kvigneshs (Contributor)
    Any solutions please?
  • lorraine.pocklington (Percona Community Manager)
    There is a blog post here covering various scenarios; towards the end there may be situations similar to the one you are facing, with potential solutions.

    As always, test and secure working backups first. I have to say that, sorry; I am sure you know this already.

    If that article doesn't help you out, could you provide more details about your environment, so that our engineers can understand the issue better and consider any known issues arising from it? In particular:
    • Percona XtraDB Cluster version
    • environment details
    • error logs if any
    • my.cnf files
    • any other relevant information that might help them understand what you have or haven't tried
  • kvigneshs (Contributor)
    Hello Lorraine,

    Thanks for the reply and the email.

    The version of Percona XtraDB Cluster is 5.7.21.
    Node 1 - Bootstrap node in London
    Node 2 - Joiner node in Hong Kong
    Node 3 - Joiner node in Frankfurt

    Configuration Files: /etc/my.cnf

    # The Percona XtraDB Cluster 5.7 configuration file.
    # * IMPORTANT: Additional settings that can override those from this file!
    # The files must end with '.cnf', otherwise they'll be ignored.
    # Please make any edits and changes to the appropriate sectional files
    # included below.
    !includedir /etc/my.cnf.d/
    !includedir /etc/percona-xtradb-cluster.conf.d/


    # Template my.cnf for PXC
    # Edit to your requirements.

    innodb_buffer_pool_size = 6G
    innodb_log_file_size = 2G
    #innodb_log_file_size = 50331648
    innodb_flush_log_at_trx_commit = 1
    innodb_flush_method = O_DIRECT
    innodb_read_io_threads = 16
    innodb_write_io_threads = 16
    innodb_io_capacity = 3000
    innodb_io_capacity_max = 6000

    # Disabling symbolic-links is recommended to prevent assorted security risks


    # Path to Galera library

    # Cluster connection URL contains IPs of nodes
    #If no IP is found, this implies that a new cluster needs to be created,
    #in order to do that you need to bootstrap this node

    # In order for Galera to work correctly binlog format should be ROW


    # MyISAM storage engine has only experimental support

    # Slave thread to use


    # This changes how InnoDB autoincrement locks are managed and is a requirement for Galera

    # Node IP address
    # Cluster name

    #If wsrep_node_name is not specified, then system hostname will be used

    #pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER

    # SST method

    #Authentication for SST method


    This is the config on all nodes. Whenever I restart the cluster, the UUID changes to a different one; this resets the joiner nodes' state and leads to SST.
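    Before restarting, the saved Galera state on each node can be checked in grastate.dat. Below is a sketch, assuming the default datadir (/var/lib/mysql); to keep the parsing step runnable, it writes a sample file with example values to /tmp instead of reading a real one.

    ```shell
    # A healthy grastate.dat after a clean shutdown looks like the sample
    # below (example values; a real file lives at /var/lib/mysql/grastate.dat).
    cat > /tmp/grastate.dat <<'EOF'
    # GALERA saved state
    version: 2.1
    uuid:    e8e50be9-7f61-11e8-ba49-0af85422503a
    seqno:   5784
    safe_to_bootstrap: 1
    EOF

    # If uuid is all zeros or seqno is -1, the node did not shut down
    # cleanly, and bootstrapping may generate a brand-new cluster UUID,
    # which in turn forces SST on the joiners.
    uuid=$(awk '/^uuid:/ {print $2}' /tmp/grastate.dat)
    seqno=$(awk '/^seqno:/ {print $2}' /tmp/grastate.dat)
    echo "uuid=$uuid seqno=$seqno"
    ```

    Comparing the uuid and seqno fields across all three nodes before restarting shows whether the saved state survived the shutdown.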
  • krunalbauskar (Advisor)
    Normally the UUID will not change on restart.

    Take a look at this example here:

    node-2 booting for the first time:
    Group state: e8e50be9-7f61-11e8-ba49-0af85422503a:0
    Local state: 00000000-0000-0000-0000-000000000000:-1

    node-2 on shutdown:
    2018-07-04T08:12:17.351541Z 3 [Note] WSREP: New cluster view: global state: e8e50be9-7f61-11e8-ba49-0af85422503a:1, view# -1: non-Primary, number of nodes: 0, my index: -1, protocol version 3

    node-1 on shutdown:
    2018-07-04T08:12:43.627751Z 2 [Note] WSREP: New cluster view: global state: e8e50be9-7f61-11e8-ba49-0af85422503a:5784, view# -1: non-Primary, number of nodes: 0, my index: -1, protocol version 3

    node-2 on restart after node-1 was restarted:
    2018-07-04T08:13:43.005469Z 0 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 5784)
    2018-07-04T08:13:43.005681Z 2 [Note] WSREP: State transfer required:
    Group state: e8e50be9-7f61-11e8-ba49-0af85422503a:5784
    Local state: e8e50be9-7f61-11e8-ba49-0af85422503a:1

    2018-07-04T08:13:43.110533Z 2 [Note] WSREP: IST receiver addr using tcp://
    2018-07-04T08:13:43.110926Z 2 [Note] WSREP: Prepared IST receiver, listening at: tcp://

    node-2 retains the same UUID, so there is no need for SST.


    * When you start node-2, is the data directory being wiped and a fresh seed being used?
    * You can also check node-2's UUID by running ./bin/mysqld ... my.cnf ... --wsrep_recover; this will not start the server but will emit the cached wsrep position.
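    The recovery step above can be sketched as follows (paths are assumptions; adjust to your installation). The mysqld invocation needs a real server, so only the log-parsing step below is executable; the example log line mirrors the format of the log excerpts quoted above.

    ```shell
    # Recover the cached wsrep position without starting the server:
    #   mysqld --defaults-file=/etc/my.cnf --wsrep_recover
    # The position is then written to the error log as a line like this:
    log_line='2018-07-04T08:15:01.000000Z 0 [Note] WSREP: Recovered position: e8e50be9-7f61-11e8-ba49-0af85422503a:5784'

    # Extract the UUID:seqno pair from the log line.
    position=$(printf '%s\n' "$log_line" | sed -n 's/.*Recovered position: //p')
    echo "$position"   # e8e50be9-7f61-11e8-ba49-0af85422503a:5784
    ```

    If the UUID printed here matches the group state on the other nodes, a restart should only need IST, not SST.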
  • kvigneshs (Contributor)
    Hi krunalbauskar,

    Thanks for sharing. In my case, the UUID was resetting every time the cluster went down (with not even a single node running).

Copyright ©2005 - 2020 Percona LLC. All rights reserved.