To support the heartbeat/monitoring feature of a popular load balancer, I created a table called “db_status” with a single column, “status_cd”, and a single row whose value is either “up” or “down”. But replication defeats the purpose of this table, since the value is propagated to the other nodes of the cluster. I’ve tried the standard MySQL “replicate-ignore-table” and “replicate-wild-ignore-table” options in my.cnf:
replicate-wild-ignore-table=mysql.db_status
or:
replicate-ignore-table=mysql.db_status
Neither works: the table’s “up” or “down” value is still replicated to the other nodes.
Is there a way to filter out tables while using Percona Xtradb Cluster?
You could probably solve this by using a storage engine that Galera doesn’t replicate, for example MyISAM with the wsrep_replicate_myisam option left at its default of OFF.
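As a rough sketch of that approach (the table and column names come from the question; treating MyISAM writes as node-local assumes wsrep_replicate_myisam remains at its default of OFF):

```sql
-- Sketch only: with wsrep_replicate_myisam=OFF (the default),
-- Galera does not replicate writes to MyISAM tables, so this
-- status value stays local to the node it was written on.
CREATE TABLE db_status (
    status_cd VARCHAR(4) NOT NULL
) ENGINE=MyISAM;

INSERT INTO db_status (status_cd) VALUES ('up');
```

Note this relies on MyISAM staying unreplicated, so I would treat it as a workaround rather than a supported design.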
There are, however, a number of well-documented ways to achieve this kind of load-balancer interoperability, such as the “clustercheck” script bundled with XtraDB Cluster (along with the corresponding “mysqlchk” xinetd configuration, also bundled). The script checks not only that the server is available but also that it has synced with the cluster, so it protects against writes landing on a node that isn’t synced. Because it is just a simple HTTP check, much like one you would use to load balance a web server, it works with a wide range of load balancers. I personally used it for a couple of years and it worked flawlessly.
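For illustration, the bundled xinetd service looks roughly like the following (paths and port may differ slightly between versions, so check the files shipped with your PXC packages); the load balancer then polls the port over HTTP and treats a 200 response as “node synced” and a 503 as “node unavailable”:

```
# /etc/xinetd.d/mysqlchk (approximate; shipped with XtraDB Cluster)
service mysqlchk
{
    disable       = no
    flags         = REUSE
    socket_type   = stream
    port          = 9200
    wait          = no
    user          = nobody
    server        = /usr/bin/clustercheck
    log_on_failure += USERID
    per_source    = UNLIMITED
}
```

A health probe is then just an HTTP GET against port 9200 on each node, configured the same way you would configure an HTTP check for a web backend.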
There are also specialist SQL load balancers, such as MaxScale, which in addition to node status checks can route different types of queries to different nodes (read/write splitting) and provide a variety of other advanced functions.
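As a hedged sketch of what that looks like in MaxScale, a minimal maxscale.cnf might combine the galeramon monitor with the readwritesplit router (server addresses, credentials, and the listener port below are placeholders, and the exact protocol name varies by MaxScale version):

```
# Hypothetical maxscale.cnf fragment for a Galera/PXC cluster
[node1]
type=server
address=192.168.0.1
port=3306

[Galera-Monitor]
type=monitor
module=galeramon       # tracks each node's Galera sync state
servers=node1
user=maxscale
password=maxscale_pw

[RW-Split]
type=service
router=readwritesplit  # writes to one node, reads spread across the rest
servers=node1
user=maxscale
password=maxscale_pw

[RW-Split-Listener]
type=listener
service=RW-Split
port=4006
```

With this in place, applications connect to MaxScale on the listener port and the router handles node selection, removing the need for a custom status table entirely.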