XtraDB slave setup from innobackupex backup not working

Hello, I made a backup and applied logs using innobackupex, which I’ve done many times. What’s new for me in this environment is the addition of GTIDs, which were fully enabled last week; all binlogs containing old transaction IDs were purged prior to taking the backup. That being said, I followed this guide: https://www.percona.com/doc/percona-xtrabackup/2.4/howtos/recipes_ibkx_gtid.html
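For reference, the guide linked above has you read the GTID set for gtid_purged from the xtrabackup_binlog_info file in the backup directory. A minimal sketch of extracting it (the file name and tab-separated format are from the Percona guide; the sample line below uses made-up values):

```shell
# Hypothetical contents of xtrabackup_binlog_info (binlog file, position, GTID set),
# tab-separated as produced by innobackupex; values here are illustrative only.
info=$(printf 'mysql-bin.000002\t1232\t00016681-1111-1111-1111-111111111111:1-15378')

# The third tab-separated field is the executed GTID set captured at backup time,
# which is what you would feed to SET GLOBAL gtid_purged on the new slave.
gtid_set=$(printf '%s' "$info" | awk -F'\t' '{print $3}')
echo "$gtid_set"
```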

When I try to set gtid_purged I get the error: ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.
And of course gtid_executed is a read-only variable.

Trying the unrecommended RESET MASTER doesn’t work either, because XtraDB sees the server as a cluster node (even though we’re not using Galera/XtraDB Cluster features):
ERROR 1148 (42000): RESET MASTER not allowed when node is in cluster

When I try starting the slave, I get the expected duplicate-key error, because I can’t set gtid_purged and the gtid_executed value is incorrect.

I would just take another backup, but this is an 8 TB database and it takes 4 days to run a backup, apply the logs, and rsync it to the new server (yes, I could stream it too).

What’s the best way to resolve this?


Please note that RESET MASTER in a PXC cluster is considered unsafe, as it would normally lead to inconsistent GTIDs between the nodes.
Even though GTIDs are not required by Galera, they matter when async replication is put into the mix.
This is why it has been disallowed since 5.7.18: https://jira.percona.com/browse/PXC-830

You can still override it by disabling Galera replication for your session:

node2 > reset master;
ERROR 1148 (42000): RESET MASTER not allowed when node is in cluster

node2 > set wsrep_on=0;
Query OK, 0 rows affected (0.00 sec)

node2 > reset master;
Query OK, 0 rows affected (0.01 sec)

node2 > set wsrep_on=1;
Query OK, 0 rows affected (0.00 sec)

node2 > show global variables like 'gtid%';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| gtid_executed                    |       |
| gtid_executed_compression_period | 1000  |
| gtid_mode                        | ON    |
| gtid_owned                       |       |
| gtid_purged                      |       |
+----------------------------------+-------+
5 rows in set (0.01 sec)

node2 > set global gtid_purged="00016681-1111-1111-1111-111111111111:1-15378,
Query OK, 0 rows affected (0.00 sec)

node2 > select @@GLOBAL.gtid_executed\G
*************************** 1. row ***************************
@@GLOBAL.gtid_executed: 00016681-1111-1111-1111-111111111111:1-15378,
1 row in set (0.00 sec)

Do that carefully though: all writes must be stopped before you try this on a running cluster, otherwise GTIDs will become inconsistent between the cluster members.
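Putting the session above together with the remaining replication setup, the whole sequence on the new slave looks roughly like this. This is a sketch: the GTID set is illustrative, and the host/user values in CHANGE MASTER TO are hypothetical placeholders you would replace with your own:

```sql
-- Bypass the PXC safety check for this session only
SET SESSION wsrep_on = 0;
-- Clears gtid_executed so that gtid_purged becomes settable
RESET MASTER;
SET SESSION wsrep_on = 1;

-- GTID set taken from xtrabackup_binlog_info in the backup directory
-- (value below is illustrative)
SET GLOBAL gtid_purged = '00016681-1111-1111-1111-111111111111:1-15378';

-- Point the slave at the master using GTID auto-positioning;
-- host and credentials are placeholders
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_AUTO_POSITION = 1;

START SLAVE;
```

With MASTER_AUTO_POSITION = 1 the slave negotiates its starting point from the GTID sets, so no binlog file/position coordinates are needed.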

Hi przemek, thanks for the reply. If we’re not using any XtraDB Cluster/Galera features and are just doing standard master/master replication, could I just leave wsrep off? If we wanted to move to a full Galera cluster in the future, could that be done with little to no downtime on the same hardware?

If you are not using Galera replication now, then yes, the wsrep provider should be left disabled.
Migration to PXC can be done with no downtime if planned properly. I suggest checking this webinar: https://www.percona.com/resources/mysql-webinars/migrating-percona-xtradb-cluster