MySQL Group Replication

Hi Team

My ibd files were removed automatically, so I had to recover the cluster from a logical backup.

When I start group replication, it throws the error below:

2023-01-22T15:06:47.167206+05:30 145 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv6 localhost address to the whitelist. It is mandatory that it is added.'
2023-01-22T15:06:47.169021+05:30 356 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 0, master_log_file='', master_log_pos= 431, master_bind=''. New state master_host='', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2023-01-22T15:06:50.454355+05:30 0 [ERROR] [MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: c345d6de-edf5-4bf6-8f3d-21846d2ac972:1-893385170,
d1099682-ff02-11e9-a422-000c29f7ac96:1-43781661,
d8f595ef-ff02-11e9-9a60-000c2943c2f5:1-2195534 > Group transactions: c345d6de-edf5-4bf6-8f3d-21846d2ac972:1-896684455,
d1099682-ff02-11e9-a422-000c29f7ac96:1-43781661'
2023-01-22T15:06:50.454431+05:30 0 [ERROR] [MY-011522] [Repl] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
2023-01-22T15:06:50.454509+05:30 0 [ERROR] [MY-011486] [Repl] Plugin group_replication reported: 'Message received while the plugin is not ready, message discarded.'
2023-01-22T15:06:50.454527+05:30 0 [ERROR] [MY-011486] [Repl] Plugin group_replication reported: 'Message received while the plugin is not ready, message dis


Hi @simerpreet18

Thanks for reaching out.

From the logs provided above, I understand that you are using group replication on some version of MySQL with at least 3 nodes. The ibd files on one of your nodes went missing, so you restored that node from a logical backup taken from another node. After the restore, you are now facing the "member has more executed transactions" error shown above. Please correct me if my understanding is wrong.

Can you please tell us which version of MySQL you are using (Percona Server for MySQL or Community), how many nodes your GR cluster has, and whether it runs in single-primary or multi-primary mode? Can you also provide the value of gtid_executed from all the nodes?
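For reference, gtid_executed can be read on each member like this (standard MySQL syntax, nothing specific to your setup):

-- Run on every node and compare the resulting GTID sets:
SELECT @@GLOBAL.gtid_executed;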

Now, coming back to the error: it means this node has transactions that are unknown to the group, so the group will not allow this node to join. It seems that extra transactions were written on the restored node, either because log_slave_updates was switched ON or because super_read_only was OFF.
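If you want to confirm how those variables are currently set on the restored node, a quick check (plain MySQL, no assumptions about your configuration):

-- Check the current values on the restored node:
SELECT @@GLOBAL.log_slave_updates, @@GLOBAL.super_read_only;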

I would rather suggest rebuilding this node from a proper backup, with those two variables set so that no local writes are recorded, so the node becomes consistent with the others. If you are sure that an inconsistent node is acceptable for your situation, then you can inject empty transactions on one of the online nodes and start group replication again on the failing node. Below is an example (adapt it to your situation):

set gtid_next='c345d6de-edf5-4bf6-8f3d-21846d2ac972:1';
begin;
commit;
set gtid_next='c345d6de-edf5-4bf6-8f3d-21846d2ac972:2';
begin;
commit;
set gtid_next='automatic';

But remember, this will make your node inconsistent with the other nodes in the GR cluster.
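To see exactly which GTIDs the failing node has that the group does not (and therefore which ones such empty transactions would have to cover), you can use the built-in GTID_SUBTRACT function; the two sets below are copied from your error log:

SELECT GTID_SUBTRACT(
  'c345d6de-edf5-4bf6-8f3d-21846d2ac972:1-893385170,d1099682-ff02-11e9-a422-000c29f7ac96:1-43781661,d8f595ef-ff02-11e9-9a60-000c2943c2f5:1-2195534',
  'c345d6de-edf5-4bf6-8f3d-21846d2ac972:1-896684455,d1099682-ff02-11e9-a422-000c29f7ac96:1-43781661'
) AS extra_local_gtids;
-- Expected result: d8f595ef-ff02-11e9-9a60-000c2943c2f5:1-2195534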


Hi @simerpreet18

Could you tell us the exact process you used to take the logical copy, restore it, and configure replication? Without those steps, we can only guess what you did.

I suspect that d8f595ef-ff02-11e9-9a60-000c2943c2f5:1-2195534 was generated during the restore process.
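One quick way to test that hypothesis (just a sketch): check the server_uuid of the restored node, since GTIDs of transactions executed locally are tagged with that UUID:

SELECT @@GLOBAL.server_uuid;
-- If this returns d8f595ef-ff02-11e9-9a60-000c2943c2f5, those extra
-- transactions were generated locally during or after the restore.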

Thank you!

Pep

Hello Ankit,

Please update.

I have restored a full backup from the primary and now need to sync with the primary, as I am getting the error that transactions are not present.

Hi @Simerpreet_Singh,

Thanks for your response.

As my colleague Pep mentioned, with the limited information we have (we don't know the exact steps you followed), we can only guess what you did. Please share all the steps you performed.

If you took a complete backup (physical or logical) and restored it on the slave/replica, the backup must contain a gtid_purged value, which needs to be set on the replica. This value represents all the transactions that had already been executed on the master (the transactions contained in your backup file), and the replica needs it to understand that everything up to that position has already been applied.
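For example, a mysqldump taken with --set-gtid-purged=ON embeds the value near the top of the dump, and it can also be set by hand on a freshly restored replica (the GTID set below is a placeholder, not your actual value):

-- Line embedded by mysqldump near the top of the dump file:
-- SET @@GLOBAL.gtid_purged='<gtid_set_from_the_source>';

-- Setting it manually (gtid_executed must be empty first):
RESET MASTER;
SET GLOBAL gtid_purged='<gtid_set_from_the_source>';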

I'm not sure what you mean by the error "transactions not present". I request you to share the full error.

If you have followed all the steps correctly, then you must register the master on the slave using the steps below:

CHANGE MASTER TO
  MASTER_HOST='master_IP',
  MASTER_USER='user_to_connect',
  MASTER_PASSWORD='******',
  MASTER_AUTO_POSITION=1;

and restart group replication on the slave.
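The restart and a sanity check would look like this (standard Group Replication statements):

STOP GROUP_REPLICATION;
START GROUP_REPLICATION;

-- Verify that the member reaches ONLINE state:
SELECT MEMBER_HOST, MEMBER_STATE
FROM performance_schema.replication_group_members;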


Thank you very much, Ankit Kapoor.

I restored the DB, and after applying the workaround you shared, the node started catching up with the master; all 3 nodes of my cluster are now in sync.

Thank you for your support

Appreciate it