Best practices for 3-node HA server setup with automatic failover (ProxySQL Cluster + XtraDB Cluster 8.0)

I have three servers that are identical in hardware and OS (Ubuntu 22.04). Each of these servers/nodes runs my Java application, and each is also a PXC node. Following best practices for PXC, I want one node to receive read/write queries from my Java app instances and the other two to receive read queries.

Instead of coding the database selection into my Java application and handling failures manually, it seems like a good idea to put ProxySQL in front of the PXC nodes and let it handle read/write splitting and failover. To keep high availability, I would install ProxySQL on each machine and configure the Java app to connect to the local ProxySQL instance instead of the local PXC instance.

To achieve this, I assume I must set up the ProxySQL instances to operate as a cluster, so that they can automatically agree on which PXC node becomes the new writer in case the current writer node is taken down. But in my case, with three equal nodes, should I set them all as “Core” nodes of the ProxySQL cluster?

For the configuration of the ProxySQL instances, I assume I can follow your Load Balance with ProxySQL guide to set up load balancing on each node, but the next steps are a bit confusing to me and I found the information at ProxySQL Cluster - ProxySQL somewhat difficult to follow.

Question:
How do I set the ProxySQL instances to communicate in a cluster and agree on one PXC node for read/write and the other two for reads, while also agreeing on which PXC node to “promote” in case the read/write node goes down?

I looked at Percona Scheduler Admin and the ProxySQL Admin, which both seemed relevant, but I couldn’t figure out how to use them for my use case.

You don’t need to use ProxySQL clustering for that. Each ProxySQL instance will deterministically come to the same conclusion about which PXC node is the writer. This is done based on MySQL’s read_only flag: if it is set on node N, that node isn’t chosen as the writer.
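For illustration, this is roughly how the read_only-driven selection is wired up; the hostgroup IDs (10 for writes, 20 for reads) are assumptions matching the ones used later in this thread, not something prescribed here:

-- On each ProxySQL admin interface (port 6032): the monitor module
-- polls read_only on every backend and moves servers between the
-- writer and reader hostgroups accordingly.
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment)
VALUES (10, 20, 'read_only based routing');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

-- On each PXC node that should only serve reads:
-- SET GLOBAL read_only = ON;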

If you don’t want to use/manage the read_only flag, then yes, you’d need to cluster the ProxySQL instances, but understand that ProxySQL’s clustering is an extremely simple source/replica scheme, not something more complex like Raft consensus.

They won’t. ProxySQL does not have this type of consensus/voting mechanism. You would have to make one ProxySQL the “leader”, and it will pick the new PXC source. When it changes the mysql_servers/mysql_hostgroups tables, the other proxies will see that change and update themselves to match.
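If you do go the clustering route, the wiring is essentially a peer list plus shared cluster credentials; a minimal sketch, run on each instance's admin interface (the IPs and the cluster user are placeholders):

-- Each instance polls the peers listed here and pulls newer
-- configuration (mysql_servers, mysql_query_rules, ...) when a
-- peer's checksum changes.
INSERT INTO proxysql_servers (hostname, port, weight, comment) VALUES
  ('192.168.1.130', 6032, 0, 'node1'),
  ('192.168.1.129', 6032, 0, 'node2'),
  ('192.168.1.128', 6032, 0, 'node3');
-- The cluster user must also appear in admin-admin_credentials.
UPDATE global_variables SET variable_value = 'cluster_user'
  WHERE variable_name = 'admin-cluster_username';
UPDATE global_variables SET variable_value = 'cluster_pass'
  WHERE variable_name = 'admin-cluster_password';
LOAD ADMIN VARIABLES TO RUNTIME; SAVE ADMIN VARIABLES TO DISK;
LOAD PROXYSQL SERVERS TO RUNTIME; SAVE PROXYSQL SERVERS TO DISK;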

Thank you for the quick reply! I think I get it now.

So, let’s say that I set up a ProxySQL instance on each of the three nodes (without ProxySQL clustering) and configure node 1 as the read/write node and nodes 2 and 3 as readers.

Then I shut down writer node 1. How will the two remaining ProxySQL instances know which of the two remaining PXC nodes (originally both readers) becomes the writer? Is that something I configure by assigning weights and groups, repeating that configuration across all ProxySQL instances? And if configured correctly, will I then have automatic failover (without having to manually reconfigure ProxySQL)?

Yep. You can give node1 weight 10000000, node2 weight 1000000, and node3 weight 1. Make sure you’ve configured single_writer in ProxySQL too.
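Note that weights alone only skew the load proportionally; the “pin everything to the top node” behavior comes from the single-writer configuration. A sketch of the weighting (hostnames are placeholders, hostgroup 10 as used later in this thread):

-- With plain weights, traffic is split proportionally (node1 would
-- get ~91% of the writes, not 100%). Combined with a single-writer
-- setup (max_writers=1 in mysql_galera_hostgroups, or the
-- singlewrite mode of proxysql-admin), only the highest-weight
-- healthy node serves writes; the others act as ordered backups.
UPDATE mysql_servers SET weight = 10000000 WHERE hostname = 'node1' AND hostgroup_id = 10;
UPDATE mysql_servers SET weight = 1000000  WHERE hostname = 'node2' AND hostgroup_id = 10;
UPDATE mysql_servers SET weight = 1        WHERE hostname = 'node3' AND hostgroup_id = 10;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;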


Thank you. I’m still somewhat confused…

Here are my current steps to achieve 3-node HA with automatic failover (single writer):

  1. Install percona-xtradb-cluster on my three Ubuntu servers, set PXC Strict Mode to MASTER and wsrep_auto_increment_control to OFF (since write operations will be isolated to a single node), and verify that replication is working after bootstrapping and exchanging certificates (a my.cnf sketch for steps 1–2 follows this list).
  2. Set wsrep_sync_wait to 1 to ensure that statements are executed on a node that is fully synced with the cluster.
  3. Install ProxySQL v2 on all three servers and add the PXC nodes to mysql_servers in two hostgroups, one for writes (group 10) and one for reads (group 20), with 192.168.1.130 as the writer with the highest weight.
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.1.130 | 3306 | 0         | ONLINE | 100    | 0           | 1000            | 0                   | 0       | 0              |         |
| 10           | 192.168.1.129 | 3306 | 0         | ONLINE | 90     | 0           | 1000            | 0                   | 0       | 0              |         |
| 10           | 192.168.1.128 | 3306 | 0         | ONLINE | 80     | 0           | 1000            | 0                   | 0       | 0              |         |
| 20           | 192.168.1.129 | 3306 | 0         | ONLINE | 100    | 0           | 1000            | 0                   | 0       | 0              |         |
| 20           | 192.168.1.128 | 3306 | 0         | ONLINE | 100    | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+---------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
  4. Apply the ProxySQL query rules for read/write split:
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES (1, 1, '^(INSERT|UPDATE|DELETE|REPLACE|CREATE|DROP|ALTER|TRUNCATE)', 10, 1); -- write queries

INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES (2, 1, '^SELECT.*FOR UPDATE', 10, 1); -- locking reads must go to the writer

INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES (3, 1, '^SELECT', 20, 1); -- read queries
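As referenced in step 1, a hedged my.cnf fragment covering steps 1–2 (the file path and surrounding settings are assumptions; check the values against the PXC 8.0 documentation):

# [mysqld] section on each PXC node (sketch only)
[mysqld]
pxc_strict_mode              = MASTER  # relaxed validation intended for single-writer setups
wsrep_auto_increment_control = OFF     # no auto-increment interleaving needed with one writer
wsrep_sync_wait              = 1       # reads wait until the node is fully synced (causality check)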

Lastly, I would set up a monitoring user on one node and create a ProxySQL client user for my Java app.
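For that last part, a rough sketch (usernames, passwords, and the schema name are placeholders; the app user has to exist both in MySQL and in ProxySQL’s mysql_users table):

-- On one PXC node (the DDL replicates to the others):
CREATE USER 'monitor'@'%' IDENTIFIED BY 'monitor_pass';
CREATE USER 'appuser'@'%' IDENTIFIED BY 'app_pass';
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%';

-- On each ProxySQL admin interface (port 6032):
UPDATE global_variables SET variable_value = 'monitor'      WHERE variable_name = 'mysql-monitor_username';
UPDATE global_variables SET variable_value = 'monitor_pass' WHERE variable_name = 'mysql-monitor_password';
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('appuser', 'app_pass', 10);
LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;
LOAD MYSQL USERS TO RUNTIME;     SAVE MYSQL USERS TO DISK;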

I am not setting up the three ProxySQL instances in a cluster, but with the identical configuration above (weight difference in hostgroup 10), they would deterministically select the correct backup writer if the main (highest-weight) writer goes down, right?

I am not using the MySQL read_only flag on the two initial readers, since that would require manual intervention if the writer node went down, but my query rules and hostgroup/weight setup should ensure I only have one writer node at any given time, right?

You mention

Make sure you’ve configured single_writer in proxysql too

Is that what I’ve done above, or is there a “single_writer” mode I need to set somewhere? Anything else I need to configure?

You should read this blog; it explains how to set up Percona XtraDB Cluster with ProxySQL. It’s the same setup.

Sorry, I assumed you were using the mysql_galera_hostgroups table to handle this. Read the blog Yunus posted. :slight_smile:
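For completeness, the mysql_galera_hostgroups route looks roughly like this (hostgroups 10/20 match the ones above; the backup and offline hostgroup IDs are my own placeholders):

-- max_writers = 1 is the "single writer" setting: only the
-- highest-weight healthy node stays in writer hostgroup 10; the
-- other write candidates are parked in the backup-writer hostgroup
-- and promoted automatically if the writer leaves the cluster.
INSERT INTO mysql_galera_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
   offline_hostgroup, active, max_writers, writer_is_also_reader, max_transactions_behind)
VALUES (10, 30, 20, 40, 1, 1, 0, 100);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;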