PXC 8.0.20/21 GTID out of order

Hello,
I am looking for help with failing Galera replication.

Background:
Recently I upgraded our PXC cluster from 5.7 to 8.0.21. On 5.7 the cluster ran in production without issues for about a year.
I first tried to upgrade by connecting a new 8.0.21 node through SST, and after that failed, by doing an in-place upgrade on one of the production nodes. Neither approach worked; there were a lot of issues caused by corrupted tables. I was therefore forced to dump the database and restore it onto a new bootstrapped node running 8.0.21. I then connected this node as an asynchronous SQL replica to the 5.7 Galera cluster. During nightly maintenance I shut down the old 5.7 cluster, upgraded its packages to 8.0.21, and connected its nodes to the standalone node via SST.
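
For reference, attaching the 8.0.21 node as an asynchronous replica of the 5.7 cluster looks roughly like the sketch below; the host, user, and password are placeholders, not the actual values used:

CHANGE MASTER TO
  MASTER_HOST = 'node1.example.com',   -- placeholder for a 5.7 cluster node
  MASTER_USER = 'repl',                -- placeholder replication user
  MASTER_PASSWORD = '***',
  MASTER_AUTO_POSITION = 1;            -- GTID-based auto-positioning
START SLAVE;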

Environment:
OS: Centos 7.9 3.10.0-1160.11.1.el7.x86_64
HW: Dell R640, 2xIntel Xeon Gold 6246R, 512GB RAM, 2TB Intel NVMe in RAID1
Other Services: Apart from base system services such as ssh, only PXC, PMM monitoring, and Icinga monitoring run on the nodes
Topology: 3-node cluster. Node 1 serves as the write/read node; Nodes 2 and 3 serve as read-only nodes. Under no circumstances should there be more than one write node. Initially, balancing was done through ProxySQL, but to rule out potential issues it was disabled, and balancing is now done only through OpenShift endpoints for the application. GTID is enabled, and there is one more node connected as an asynchronous SQL replica to Node 2.
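
Cluster membership and the state of the async replica behind Node 2 can be checked with the usual status queries, for example (a minimal sketch, not necessarily the exact checks used here):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';         -- expected: 3
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- expected: Synced on all nodes
SHOW SLAVE STATUS\G                                   -- run on the async replica attached to Node 2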

Issue:
The issue first appeared shortly after the maintenance ended. It was first seen on Node 2, but has since been observed on both read-only nodes. It appears shortly (5-60 minutes) after putting a node under read load, every time. The GTID sequence number recorded for the problematic transaction is always far larger than the GTIDs actually occurring around that time. For example, in the Node 2 log below the sequence number is 88349568, yet the executed GTID on that server was 55015816, and the other nodes were at roughly the same position (5501xxxx) at that time. There is not much replication lag, and flow control almost never has to pause the nodes.
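
For context, the per-node positions can be compared with simple status queries like these (a minimal sketch; not necessarily the exact checks used here), run on each node:

SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';       -- last applied Galera seqno on this node
SELECT @@global.gtid_executed;                        -- executed MySQL GTID set
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';  -- confirms flow control barely pauses the nodes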

Log snippets of the issue follow:

Node 3 log example:

2021-01-25T11:10:03.094636Z 73 [Warning] [MY-000000] [WSREP] BF applier failed to open_and_lock_tables: 0, fatal: false wsrep = (exec_mode: high priority conflict_state: must_abort seqno: 40343262) 
2021-01-25T11:10:03.094741Z 73 [Warning] [MY-000000] [WSREP] Event 9 Update_rows apply failed: 1, seqno 40343262
2021-01-25T11:10:03.096038Z 73 [Note] [MY-000000] [Galera] Failed to apply write set: gtid: 070b2ae4-5cd3-11eb-ba3f-57b07a895b85:40343262 server_id: 503af06c-5ea1-11eb-a9be-b6af4b4e4ec3 client_id: 128064 trx_id: 314524130 flags: 3

Node 2 log example:

2021-01-28T14:51:17.088810Z 21 [Warning] [MY-000000] [WSREP] BF applier failed to open_and_lock_tables: 0, fatal: false wsrep = (exec_mode: high priority conflict_state: must_abort seqno: 88349568) 
2021-01-28T14:51:17.088860Z 21 [Warning] [MY-000000] [WSREP] Event 7 Delete_rows apply failed: 1, seqno 88349568
2021-01-28T14:51:17.097781Z 21 [Note] [MY-000000] [Galera] Failed to apply write set: gtid: 070b2ae4-5cd3-11eb-ba3f-57b07a895b85:88349568 server_id: 503af06c-5ea1-11eb-a9be-b6af4b4e4ec3 client_id: 1043006 trx_id: 2987255406 flags: 3

Config snippet follows:

expire_logs_days=3
log-bin=db1-bin
log-slave-updates
wsrep_sst_method=xtrabackup-v2
wsrep_on=ON
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so
wsrep_provider_options="gcache.size=1024M"
wsrep_certification_rules="optimized"
wsrep_slave_threads=120
binlog_format=ROW
default-storage-engine=innodb
innodb_buffer_pool_size = 420G
innodb_buffer_pool_instances = 64
innodb_file_per_table=1
innodb_autoinc_lock_mode=2
innodb_log_file_size=4G
innodb_io_capacity=25000
innodb_io_capacity_max=40000
innodb_flush_log_at_trx_commit=2
sql_mode=NO_ENGINE_SUBSTITUTION
symbolic-links=0  
sync_binlog=0 
pxc_strict_mode=PERMISSIVE 
innodb_flush_sync = off
gtid-mode=ON
enforce_gtid_consistency=ON
pxc_encrypt_cluster_traffic=OFF
default-authentication-plugin=mysql_native_password
explicit_defaults_for_timestamp=0
innodb_undo_log_truncate=1
log_error_verbosity=3
binlog_transaction_dependency_tracking=WRITESET_SESSION
transaction_write_set_extraction=XXHASH64
max_connections = 4000
key_buffer_size = 1G
max_allowed_packet = 1024M
net_read_timeout=3600
net_write_timeout=3600
open_files_limit = 15000
sort_buffer_size = 8M
read_buffer_size = 8M
read_rnd_buffer_size = 16M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
tmp_table_size = 128M
max_heap_table_size = 128M
innodb_read_io_threads = 64
innodb_write_io_threads = 64

Hi @jan.sedlak

Does ticket PXC-3593 describe the issue you are seeing?