Too many FC messages sent from node1

Hi all!
I’m testing PXC 8 with ProxySQL and sysbench, and I’ve noticed that my node1 is sending a lot of FC (flow control) messages. The scenario is sysbench feeding a ProxySQL setup with max_writers=1, so only the third node receives traffic, while node1 and node2 only apply the wsrep replication stream. Yet only node1 is sending FC messages to the cluster, and I’d like to know why and where I should look for a solution. Can you please guide me with this? I can provide any info from logs, ProxySQL tables, variables, etc.

sysbench /usr/share/sysbench/oltp_read_write.lua --db-driver=mysql --mysql-host=127.0.0.1 --mysql-user='sbuser' --mysql-password='sbpass' --mysql-port=6033 --mysql-db=sbtest --tables=1 --table_size=1000000 --db-ps-mode=disable --threads=16 --report-interval=1 --time=1800 --skip-trx=off --mysql-ignore-errors=all run
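
In case it helps, this is roughly what I’m checking so far; the first query assumes ProxySQL 2.x native Galera support, and the rest are the standard wsrep status counters:

-- on the ProxySQL admin interface (port 6032 by default): how writes are routed
SELECT writer_hostgroup, reader_hostgroup, max_writers, writer_is_also_reader
FROM mysql_galera_hostgroups;

-- on each PXC node: who is emitting flow control and how big the apply backlog is
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_sent';
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_recv';
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%';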


node1: [wsrep status graphs attached]
Hello split-brain,
FC messages are broadcast when a node's receive queue fills up. Your FC limit is set to 173, which is the default calculation for 3 PXC nodes. One solution is to simply increase gcs.fc_limit using SET GLOBAL wsrep_provider_options="gcs.fc_limit=500";
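For reference, checking the current limit and the receive queue before and after the change would look roughly like this (standard Galera status and variable names; pick a limit that fits your workload):
-- the current gcs.fc_limit is embedded in the provider options string
SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';
-- average apply-queue backlog on the node that sends FC messages
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue_avg';
-- raise the threshold at which the node starts broadcasting FC
SET GLOBAL wsrep_provider_options='gcs.fc_limit=500';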
However, this is probably just a band-aid. I see your VMs have very small amounts of RAM, which tells me that you are running a very small, potentially undersized InnoDB buffer pool. This could be the bottleneck for all the write operations applied through PXC replication. None of your graphs show CPU saturation or disk I/O metrics, so my guess is that disk is the secondary issue.
Looking at your graphs, you’re doing about 6K QPS, which, quite honestly, is pretty dang good for only 1.73GB of RAM per VM. I suggest increasing the amount of memory per VM and setting innodb_buffer_pool_size to approximately 80% of your RAM. Additionally, you can turn off per-commit flushing by setting innodb_flush_log_at_trx_commit=2, which should reduce the disk flushing overhead.
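As a rough sketch only (the size below is a placeholder that assumes you move to about 4GB of RAM per VM; persist the final values in my.cnf as well):
-- ~3GB buffer pool, roughly 80% of a hypothetical 4GB VM
SET GLOBAL innodb_buffer_pool_size = 3221225472;
-- flush the redo log once per second instead of on every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 2;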

Hey matthewb, thank you for your detailed answer! I’ve followed your recommendations and it solved my issue. Now it looks so much better. Thank you very much!
node1: [updated wsrep status graphs attached]