MySQL scaling limitations

I was checking the benefits that Percona can provide and would like to clarify something.

In the documentation I looked at the allowUnsafeConfigurations key description, which says that I cannot start a cluster with fewer than 3 or more than 5 XtraDB Cluster instances if it is set to false. It does not say anything about scaling at runtime. Does that mean it is fine to scale down to fewer than 3 instances, or up to more than 5, at runtime? Is there a limit on the number of instances beyond which Percona can no longer handle it, possibly leading to data loss? And why is it unsafe to have a number of instances outside the 3-to-5 range?

Hi @paul_sinke, thanks for posting to the Percona forums!

The limitations are based on the concept of quorum: with fewer than 3 nodes you could suffer a split-brain scenario, which is undesirable because it could introduce data differences into your cluster. For example, a cluster of two nodes could be partitioned by a network failure. Each side would have equal voting weight, so quorum could not be achieved, and neither side would know whether it still forms the Primary Component.
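The quorum rule above can be sketched in a few lines. This is an illustration of the general majority-vote idea, not Percona's actual implementation:

```python
# Illustrative sketch (not Percona code): why a 2-node cluster cannot
# survive a network partition, while a 3-node cluster can.

def has_quorum(visible_nodes: int, cluster_size: int) -> bool:
    """A partition keeps quorum only if it sees a strict majority of votes."""
    return visible_nodes > cluster_size / 2

# 2-node cluster split by a partition: each side sees only itself.
print(has_quorum(1, 2))  # False -- neither side can form the Primary Component

# 3-node cluster split 2 vs 1: the pair keeps quorum and stays writable.
print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False -- the lone node stops accepting writes
```

The strict majority (`>`, not `>=`) is what prevents both halves of an even split from claiming quorum at the same time.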

For users who simply don’t need the redundancy of three full data nodes (usually because they don’t want to spend the resources on datadir space), there is the garbd arbitrator, which provides the additional voting member needed to achieve quorum in a 3-node cluster. I don’t believe garbd is yet supported by our Kubernetes Operator for MySQL; however, if you are running on compute instances or bare metal, you can certainly deploy garbd to ensure an odd number of voting nodes and achieve quorum.
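As a rough sketch, joining garbd as a third voting member could look something like this. The hostnames and group name here are placeholders, and you should check the garbd documentation for your version before relying on the exact flags:

```shell
# Sketch: run garbd (Galera Arbitrator) as a vote-only member alongside
# two PXC data nodes. "my_pxc_cluster" and the node addresses are
# placeholders for your wsrep_cluster_name and node hosts.
garbd \
  --group=my_pxc_cluster \
  --address="gcomm://node1:4567,node2:4567" \
  --daemon
```

garbd participates in voting but stores no data, so it needs far fewer resources than a full node; port 4567 is the default Galera replication port.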


Hello @Michael_Coburn, thank you for your response. Now it is clear why I need at least 3 instances, but I would still like to know whether 5 instances is the limit for a cluster. What will happen if I spawn more than that?


Hi @paul_sinke

I’m not sure of the reasoning for why 5 is the upper limit either. For example, we have customers running with up to 7 instances in a single PXC cluster.

I can only assume it is due to managing resources within Kubernetes, so a cap of some description was set. If you feel you need more than 5 for your application, please feel free to log a JIRA feature request; I am sure the Operators team would be happy to hear you out!
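For what it's worth, quorum itself does not impose an upper bound of 5; it only determines how many node failures a given cluster size can tolerate. A small illustration (again, just the majority math, not anything Percona-specific):

```python
# Illustration: node failures an n-member voting cluster can tolerate
# while the survivors still hold a strict majority of votes.

def tolerated_failures(n: int) -> int:
    """Largest f such that the remaining n - f nodes still exceed n/2."""
    return (n - 1) // 2

for n in range(3, 8):
    print(n, "nodes ->", tolerated_failures(n), "failure(s) tolerated")
# 3 and 4 nodes both tolerate only 1 failure; 5 tolerates 2. Even sizes
# add cost without adding fault tolerance, which is why odd sizes like
# 3 and 5 are the common recommendations.
```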
