I have a Galera cluster with 3 nodes, version 5.7.41-44-57-log. I have disabled the wsrep_slave_FK_checks variable to skip foreign key checks while writesets are committed on the slave nodes. I'm still receiving errors like this in the log:
[ERROR] Slave SQL: Could not execute Write_rows event on table; Cannot add or update child row: a foreign key constraint fails. Error_code: 1452; handler error HA_ERR_NO_REFERENCED_ROW; the event’s master log FIRST

It retries several times, then kills the service and takes down the whole cluster.

What can be done? Is it a bug?

Hello @Ani_Ghoghoberidze,

Usually when PXC finds an inconsistent node, it evicts it from the cluster; the node then restarts and performs an SST. This is how the node should be brought back into the cluster, rejoining with a fresh restore and therefore without data discrepancies.

  • Isn’t the node doing an SST after the crash?
  • Can you please share the table definitions?
  • Can you share the configured value of wsrep_slave_threads?

Before calling it a bug, let’s verify a few things:

  • Are the crashes repeated?
  • Even after an SST?
  • Is it the same node crashing each time?
  • Are you writing to multiple nodes?
  • Are you hitting one of Galera’s known limitations? For example, is the table MyISAM?
  • Is there a specific process or set of queries you run before hitting this error?
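For reference, the information asked for above can be gathered with a few statements; the table name below is a placeholder for your actual table:

```sql
-- Table definition (replace child_table with your actual table name)
SHOW CREATE TABLE child_table\G

-- Current applier-thread and FK-related wsrep settings
SHOW GLOBAL VARIABLES LIKE 'wsrep_slave%';

-- Cluster state, useful to confirm whether the node rejoined via SST
SHOW GLOBAL STATUS LIKE 'wsrep_local_state%';
```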


@kedarpercona Hi.

  • Yes, recovery goes through SST after a crash;
  • here are the (partial) definitions of the 2 tables on which I’m getting the error:
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `c_gid` binary(16) NOT NULL,
  `p_gid` binary(16) NOT NULL,
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`),
  UNIQUE KEY `ccp_pk` (`c_gid`,`p_gid`),
  KEY `ccpcc_fk` (`c_gid`),
  KEY `ccpcp_fk` (`p_gid`),

  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `gid` binary(16) NOT NULL,
  `definition_id` int(11) DEFAULT NULL,
  `name` varchar(160) DEFAULT NULL,
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `gid_UNIQUE` (`gid`),
  KEY `definition_id` (`definition_id`),
  CONSTRAINT `cpdp_fk` FOREIGN KEY (`definition_id`) REFERENCES `dp` (`id`)

  • wsrep_slave_threads value is 16.

  • Yes, the crashes repeat from time to time. We write to the master first. I have an application which is compatible with a single node, but when it comes to the cluster, writing data to the other 2 nodes throws the error I wrote above. The error appears on both slave nodes at the same time, and they both crash. We don’t hit any limitations; all tables are InnoDB.

Hi @Ani_Ghoghoberidze,

About this,

“I have an application, which is compatible with single node, but when it comes to cluster, while writing data into another 2 nodes throws that error which I wrote above”

  • So, is the application writing to all 3 nodes?
  • Writing to multiple nodes on PXC is not a good practice, and it is not really going to help scale writes either.
  • If the application writes to a single node, do you still see this issue? If not, I recommend you follow best practice and write to a single cluster node.


Hi @Ani_Ghoghoberidze,

Is the application updating the related tables in the same transaction? If not, I recommend encapsulating both changes in the same transaction. The parallel appliers probably don’t see the relationship between the two transactions and assume that different threads can apply them.

Things that could fix this:

  • Make sure the inserts are performed in the same transaction.
  • Increase the value of wsrep_retry_autocommit to allow the second transaction to retry if it is executed before the first.
  • Reduce the number of parallel applier threads to one.
  • Disable foreign keys completely, either by dropping them or by setting foreign_key_checks=0 on all the nodes.
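As a sketch of the first suggestion, using hypothetical `parent` and `child` tables (not the poster’s actual schema), grouping both inserts in one transaction makes them a single writeset, so one applier thread replays them in order on the other nodes:

```sql
-- Hypothetical tables; both rows commit as a single unit, so the
-- child row can never be applied before its parent on the other nodes.
START TRANSACTION;
INSERT INTO parent (id, name)        VALUES (1, 'p1');
INSERT INTO child  (parent_id, name) VALUES (1, 'c1');
COMMIT;
```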

Please keep us updated with your tests.



@Pep_Pla Hello

The application writes to the master node only, without any problem. The error occurs while the writesets are applied on the other 2 nodes.

With wsrep_slave_threads=1, won’t I have problems with flow control?

And why does the wsrep_slave_FK_checks variable not work as expected?

@Pep_Pla @kedarpercona I think the main question here is: why does wsrep_slave_FK_checks = OFF have no effect?
In theory it should disable FK checking, so HA_ERR_NO_REFERENCED_ROW should not be generated at all. Or are we missing something?

Hi @Perica_Jakimov and welcome to Percona Community.

Thanks for raising this concern, and I tend to agree with you. In theory, the applier threads should ignore FK-related errors when wsrep_slave_FK_checks=OFF.

One thing I can think of: since this is a dynamic variable, it is possible that the applier threads have not picked up the config change (assuming you used SET GLOBAL). Can you make the change and do a rolling restart, to ensure it applies to all applier threads, and see if the issue repeats?
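Something along these lines, assuming the variable was only changed dynamically so far (variable name as in PXC 5.7):

```sql
-- Dynamic change; long-running applier threads may only pick it up
-- after a node restart, hence the rolling-restart suggestion.
SET GLOBAL wsrep_slave_FK_checks = OFF;

-- Verify the current value
SHOW GLOBAL VARIABLES LIKE 'wsrep_slave_FK_checks';
```

To make it survive the rolling restart, the equivalent line would also go under `[mysqld]` in my.cnf on each node.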

If this doesn’t help, I’d go ahead and raise a bug report with as much detailed information as you can provide.



All 3 nodes have already been restarted several times, and when selecting the variable, the value is 0.

@Ani_Ghoghoberidze @Perica_Jakimov

I’ve been looking at the code to see if something has changed recently, and I can’t find any change.

These are the contents of ./percona-xtradb-cluster/sql/

  if (wsrep_slave_FK_checks == FALSE)
    thd->variables.option_bits |= OPTION_NO_FOREIGN_KEY_CHECKS;
  else
    thd->variables.option_bits &= ~OPTION_NO_FOREIGN_KEY_CHECKS;

This code is nine years old :confused:

My recommendation is that you file a bug. Maybe the parallel applier overrides these values somewhere, but I’m not a member of the engineering team, so I can’t tell you.

Reducing parallelism can have a performance impact and trigger flow control, but only if you really need those 16 parallel appliers. I would reduce the number of parallel appliers if you don’t need them.

Did you check if the commands that trigger this situation are part of different logical transactions or the same? If they are part of the same process, I would wrap them with begin and commit, and I would also test the wsrep_retry_autocommit parameter if you can’t change the code.

I am not very optimistic about the wsrep_retry_autocommit parameter, because it applies at the certification stage, and the changes were already certified when the applier tried to apply them.
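If you do decide to test the two tunables discussed above, both are dynamic in PXC 5.7; the values below are illustrative, not recommendations:

```sql
-- Reduce parallel appliers (may trigger flow control under heavy writes)
SET GLOBAL wsrep_slave_threads = 1;

-- Allow autocommit transactions to be retried after a conflict
SET GLOBAL wsrep_retry_autocommit = 4;
```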

Also, in addition to Pep’s comment above:

  • It’d be good to provide a repeatable test case for the issue you’re seeing: sample queries/tables and the sequence of steps that generate it.
  • I’d also recommend writing to a single node only and, if needed, splitting reads between all nodes. PXC is not a write-scaling solution. I believe doing so should solve your issue temporarily while we identify the root cause.


@kedarpercona @Pep_Pla thanks for your comments.
The writes are always on 1 node only, and the problem is always on the other 2 (applier) nodes. That’s why we think this variable is not doing its job.

I don’t think wsrep_retry_autocommit will solve the issue, because it works at the certification stage, and we pass that stage without problems.

The queries hit the writer node in the right order (e.g. INSERT INTO parent, INSERT INTO child, …), but when they are all passed to the applier nodes, in some situations we get a race condition: a thread that wants to apply the child records wins the race, so the FK check fails. We are sure that we have data consistency, because it was checked on the writer node, so all we want is for the other 2 nodes to make a copy of the same dataset. That is why we think wsrep-slave-fk-checks is not doing its job; if it worked, we should be OK.

Also, we cannot reproduce this, because it happens unexpectedly. Most of the time the same set of queries applies without problems. Sometimes a node logs the error, retries applying the record, and meanwhile the parent record has succeeded, so the retry succeeds too. And sometimes it retries 4 times, reports that the node has an inconsistent dataset, and the node is kicked out of the cluster. If it is only 1 node, it is not a big deal; we rejoin it. The real problem is when this happens on both applier nodes at about the same time: then we have 2 nodes out of the cluster and the whole cluster is down, resulting in downtime.

The other problem is that the queries are generated from multiple app threads, and the DEV team says they cannot be packed into one logical SQL transaction. I’m sure that if they packed them correctly we would not have the problem, because the whole transaction would be applied by just 1 applier thread, but it is hard for me to fight this :slight_smile:

We will open a bug and see what would be the outcome.

Thanks again.


Did you try setting foreign_key_checks to 0 globally on all (or some) of the replicas? If the replicas are not processing direct writes, it should not affect them.

If a replica is promoted to master, then re-enable it.

It is important to change the value in my.cnf as well; otherwise, sessions started before the change will not see it.
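To illustrate the session-scope caveat: a dynamic change like the one below only affects sessions opened afterwards, which is why it needs to go into my.cnf too so it survives restarts (a sketch of the suggestion above, only for nodes that take no direct writes):

```sql
-- Takes effect for NEW sessions only; sessions opened before this
-- statement keep foreign_key_checks enabled until they reconnect.
SET GLOBAL foreign_key_checks = 0;

-- Verify
SHOW GLOBAL VARIABLES LIKE 'foreign_key_checks';
```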

We’re talking about the other 2 members of the cluster, not replicas.

We have a 3-node cluster: 1 master and 2 slaves. We always write to a single node (the master), and the writesets are then applied on the other 2 nodes, which is exactly where we receive errors from time to time, after which both of them crash.

If we disable foreign key checks globally, someone could mess things up.

I mean disabling foreign key checks only on the members that do not process direct writes.


We can’t turn them off on the other 2 nodes, because we’re using ProxySQL; if the master fails, connections and writes will be redirected to one of the slave nodes.

About that variable: should we open a bug, or will you do it?

Hi @Ani_Ghoghoberidze

I’m sorry, but we don’t open bugs on behalf of users. You’ll have to do it.


@Pep_Pla @kedarpercona

Thank you for your involvement.


Opened a bug.
