Percona Operator, MySQL server has gone away, ProxySQL pods restart fixes the issue

I have encountered an issue that I was able to reproduce on multiple setups with minimal hassle. To reproduce it, I simply restart a random node, wait until all pod statuses are OK again and the cluster size is back to normal, and then restart another node. For me it has never taken more than 4 restarts to trigger the issue; your mileage may vary.
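
In case it helps, the reproduction loop boils down to something like this (a rough sketch: the cluster name `cluster1`, the label selector, and the assumption that "restarting a node" means deleting a PXC pod are all mine; adjust for your setup):

```shell
# Delete one PXC pod at random; the StatefulSet recreates it
kubectl delete pod cluster1-pxc-1

# Wait until all PXC pods report Ready again
# (I also verify wsrep_cluster_size manually before the next restart)
kubectl wait --for=condition=Ready pod \
  -l app.kubernetes.io/component=pxc --timeout=600s

# Then repeat with a different pod
```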

After reaching this point, ProxySQL no longer functions properly: it allows somewhere between 1 and 50 inserts before it starts returning "MySQL server has gone away" errors. Fortunately, the fix is easy: restart the ProxySQL pods (delete all the proxysql pods, and Kubernetes will recreate them).
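
For anyone hitting the same thing, the workaround is just this (the label selector is an assumption based on the default operator labels; check yours with `kubectl get pods --show-labels`):

```shell
# Delete all ProxySQL pods; Kubernetes recreates them and ProxySQL works again
kubectl delete pod -l app.kubernetes.io/component=proxysql
```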

It would be preferable not to have to manually restart any pods to keep my cluster online.

Cheers!

I have decided to switch to HAProxy for the time being. It appears to handle nodes going down and rejoining without causing an outage, maintaining uninterrupted access to the database, at least in the cases I have tested. This seems to be a more stable alternative.
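
For reference, the switch was essentially this kind of change to the custom resource (a sketch only: the CR name `cluster1` and the `pxc` short name are assumptions, and editing cr.yaml and re-applying it works just as well):

```shell
# Disable ProxySQL and enable HAProxy on the PXC custom resource
kubectl patch pxc cluster1 --type=merge \
  -p '{"spec":{"proxysql":{"enabled":false},"haproxy":{"enabled":true}}}'
```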