We have a three-server cluster with MySQL 5.5 in AWS for a high-traffic website (1M accesses daily).
Each access always executes a SELECT plus an INSERT or UPDATE on large tables (up to 270M rows), always using indexes.
One server is the master and receives all the write queries. The other two servers are used for reads only.
Normally, the PROCESS LIST is pretty clean and all queries are executed very fast.
During traffic peaks (5-6 times every day) the master server hangs with 8 queries stuck in the wsrep pre-commit stage.
The rest of the write queries are queued until those 8 are committed.
After reading every post we could find, we changed the wsrep_slave_threads from 8 to 16.
The servers have 8 cores, and wsrep_cert_deps_distance goes from 40 up to 250 (normally in the 100s range).
We are still having the same issue, and every time it happens we find 8 queries in the pre-commit stage (all services have been restarted after changing the parameter).
If I understand correctly how wsrep_slave_threads works, if the server hangs, we should see 16 queries, right?
Please, let me know if we are missing something or you have any ideas what we could try.
Thank you in advance
wsrep_slave_threads only defines how many threads are used to apply transactions replicated from other nodes in parallel. So for your writer node ("master") it does not matter much; rather, it may help the read nodes apply updates faster.
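For reference, the variable is dynamic, so it can be changed without a restart. A minimal sketch (the value 16 is just your setting; a common rule of thumb is a few threads per core, bounded by what wsrep_cert_deps_distance suggests can actually be applied in parallel):

```sql
-- On each READ node (where parallel applying matters):
SET GLOBAL wsrep_slave_threads = 16;

-- Sanity-check how much parallelism certification allows on average:
SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance';
```

Note that already-started applier threads are not killed when you lower the value; the pool shrinks as threads become idle.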
The connections with write queries/transactions sitting in "wsrep in pre-commit stage" are either waiting on the replication phase, where writesets are certified and delivered to the other nodes (remember, each node has to ACK them), or waiting on a global cluster pause due to Flow Control.

If the write throughput is simply too high for the read nodes to catch up, they will emit a lot of short Flow Control pauses, so commits on the writer should not stay in that stage for too long. However, the bigger your transactions, the slower the whole process becomes, and eventually commit ordering will hit you: a very big transaction being applied on a peer node makes the other transactions wait for their turn, as they cannot be applied in a different order. It will also eventually trigger Flow Control if too many waiting transactions fill the receive queue.
For that reason, it is wise to avoid large transactions in Galera/PXC. The worst-case scenario is when you have big tables without primary keys: a big transaction on such a table can render the whole cluster blocked for a very long time.
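As a quick check (a sketch, not an official tool — adjust the schema filter for your databases), you can list tables that lack a primary key via information_schema:

```sql
-- Tables with no PRIMARY KEY constraint:
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema    = t.table_schema
      AND c.table_name      = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
  AND c.constraint_name IS NULL;
```

Any table returned here is a candidate for adding an explicit primary key before it causes trouble under replication.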
In your case, up to 270M rows per transaction is just too much for Galera to handle smoothly; you should divide them into much smaller chunks, maybe 10k rows each or so.
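To illustrate the chunking idea (table and column names here are hypothetical), instead of one giant statement you repeat a bounded one until nothing is left, so each writeset stays small:

```sql
-- Hypothetical: purge old rows from a very large table in 10k-row chunks
-- instead of a single huge DELETE that produces one enormous writeset.
DELETE FROM big_table
WHERE created_at < '2014-01-01'
LIMIT 10000;

-- Check how many rows the last statement touched;
-- repeat the DELETE until this returns 0.
SELECT ROW_COUNT();
```

The same pattern applies to large UPDATEs or bulk INSERT ... SELECT jobs: loop in the application with a LIMIT (or a key range) and commit between chunks.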
Thank you for your response. In our case we should not have big transactions; we have a large number of fast INSERTs and UPDATEs, around 1M daily.
Could you give us any hint on how to diagnose what is causing the waiting?
Thank you again.
First thing is to check whether Flow Control kicks in. Did you check the related status variables? For example, by comparing how many FC pauses were sent and received, and how large the receive queues are, you can find which nodes cause the problem.
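Concretely, run this on each node (a diagnostic sketch; counters are cumulative since the node started, so compare two samples taken some minutes apart):

```sql
-- Flow Control activity: a node that SENDS pauses is the bottleneck,
-- a node that RECEIVES them is being throttled by someone else.
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';
-- e.g. wsrep_flow_control_sent, wsrep_flow_control_recv,
--      wsrep_flow_control_paused (fraction of time paused)

-- Receive queue: a persistently high average means the node
-- cannot apply writesets as fast as it receives them.
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%';
```

If one read node shows a high wsrep_flow_control_sent and a growing wsrep_local_recv_queue_avg, that node is the one stalling the writer's commits.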