I have a single standalone node running version 8.x.
Next, I created a PXC 8.x cluster:
I took a backup from the single node,
restored it to the first node of the PXC 8 cluster and bootstrapped it,
and added the 2 remaining nodes to the cluster.
Everything seems to be OK so far.
But now I need to bring the cluster in sync with the live single node.
I set up replication (I tried both GTID and binlog position) and run START SLAVE, but the node does not sync anything (Seconds_Behind_Master keeps growing), commands cannot be executed, and only restarting the service helps.
After restarting the node, I see this in the log:
Slave SQL for channel '': Worker 1 failed executing transaction '****' at master log mysql-bin.000011, end_log_pos 108802; Error 'WSREP has not yet prepared node for application use' on query. Default database: ''. Query: 'BEGIN', Error_code: MY-001047
Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000011' position 108633
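For reference, the replication setup on the PXC node was roughly like the following sketch (hostname and credentials are placeholders, not taken from my actual config):

```sql
-- On one PXC node, with GTID mode enabled on both sides:
CHANGE MASTER TO
  MASTER_HOST = 'single-node.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '***',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
-- Then check progress with: SHOW SLAVE STATUS\G
```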
You are mixing two different replication styles. PXC does not use traditional replication. PXC uses the Galera communications library to manage transaction replication.
If you were successful in getting all 3 nodes connected together, then YAY you are done! That is your cluster. The nodes communicate and share transactions over tcp/4567 (Galera group communication); tcp/4444 is used only for SST. There is nothing else to configure.
You should read up on PXC 101 Basics. PXC has built-in, native synchronization. When you bring node2 online, it will contact another member of the cluster. If node2 doesn’t have the same data as the other node, then node2 will automatically receive a copy of the entire dataset. This is known as SST.
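The SST behaviour described above is driven by a handful of my.cnf settings. A minimal sketch, with placeholder addresses and node names (not taken from this thread):

```ini
[mysqld]
wsrep_provider=/usr/lib/galera4/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_name=pxc2
wsrep_node_address=10.0.0.2
wsrep_sst_method=xtrabackup-v2
```

On PXC 8.0, xtrabackup-v2 is the supported SST method; the joiner picks a donor from the addresses in wsrep_cluster_address.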
Shut down the new node. Erase the datadir completely and then start MySQL. This new node will automatically get a copy of the entire dataset from the existing node.
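Concretely, the wipe-and-rejoin looks roughly like this (paths assume a default package install; run this only on the joining node, never on the bootstrapped node):

```shell
systemctl stop mysql
rm -rf /var/lib/mysql/*    # erase the datadir completely
systemctl start mysql      # the node now requests a full SST from a donor
```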
You misunderstood me
I have 1 working single node on 8.x.
The task is to migrate this node to the cluster.
I took a backup from this single node and created a cluster with 3 nodes.
Now, in order to switch over to the cluster, I need to synchronise it with the working single node.
When I run START SLAVE on one of the cluster nodes, that node crashes.
Yes, that's what I do: I use GTID to set up replication from the single node on one of the cluster nodes.
But the problem is that when I run START SLAVE, the node is expelled from the cluster, and only a restart helps.
This is where the problem is
Well, I would start your cluster over fresh because it looks like you have some sort of data mismatch that is causing the cluster to vote to expel that member when replication starts.
Blow away the cluster, bootstrap 1 node, configure async replication to this node. Verify this works, then start node2, wait for SST, verify replicated events received by node1 are going to node2. Then start node3.
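That order of operations, sketched out (hostnames are placeholders; on PXC 8 the bootstrap systemd unit is typically mysql@bootstrap.service):

```shell
# 1. On node1 only, bootstrap a fresh one-node cluster:
systemctl start mysql@bootstrap.service
# 2. On node1, point async replication at the single node and verify
#    events are being applied:
#      CHANGE MASTER TO ... MASTER_AUTO_POSITION=1; START SLAVE;
#      SHOW SLAVE STATUS\G
# 3. Start node2, wait for SST to finish, then confirm that rows
#    replicated from the single node to node1 also appear on node2.
# 4. Start node3 the same way.
```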
Do I understand correctly: run the first node in bootstrap mode with slave replication, without adding the other nodes to it?
And after the first node syncs with the single node, connect the second and third?
Thanks for the help, I'll keep trying to figure out what's wrong.
I don't understand why, when I comment out the wsrep_ lines in my.cnf on PXC1 and start the node outside cluster mode, replication from mysql1 works fine.
But as soon as the node runs in cluster mode (whether bootstrapped or joined normally), replication starts (Slave_IO_Running: Yes / Slave_SQL_Running: Yes) but lags behind, and the node is expelled from the cluster…
As far as I can tell, the problem is hidden in the cluster mode, but there are not that many settings: