Troubleshooting Cluster Crashes: Issues with ALTER TABLE and FULLTEXT Indexes on Large Datasets

Hello, I have a cluster with 4 nodes. When I run an ALTER TABLE on a table with 500,000 records, the cluster crashes. Similarly, if I create a FULLTEXT index on a table with more than 500,000 records, all nodes crash. Could you help me resolve this?
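For reference, the kind of statement that triggers it (the table and column names here are placeholders, not my actual schema):

ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);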

After every crash I need to perform a full SST (200 GB), which takes almost 3 hours.

4 MySQL machines
Red Hat Enterprise Linux release 9.3 (Plow) x86_64
12 CPU Cores / 31508 MB RAM
Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

Below are the WSREP variables:

wsrep_applier_FK_checks ON
wsrep_applier_threads 8
wsrep_applier_UK_checks OFF
wsrep_auto_increment_control ON
wsrep_causal_reads OFF
wsrep_certification_rules strict
wsrep_certify_nonPK ON
wsrep_cluster_address gcomm://10.33.19.24,10.33.19.25,10.33.19.26,10.33.19.22
wsrep_cluster_name my_cluster
wsrep_data_home_dir /mys01/mysql/
wsrep_dbug_option
wsrep_debug NONE
wsrep_desync OFF
wsrep_dirty_reads OFF
wsrep_disk_pages_encrypt NONE
wsrep_gcache_encrypt NONE
wsrep_ignore_apply_errors 0
wsrep_load_data_splitting OFF
wsrep_log_conflicts ON
wsrep_max_ws_rows 0
wsrep_max_ws_size 2147483647
wsrep_min_log_verbosity 3
wsrep_mode
wsrep_node_address 10.33.19.24
wsrep_node_incoming_address AUTO
wsrep_node_name nodenamesistem
wsrep_notify_cmd
wsrep_on ON
wsrep_OSU_method TOI
wsrep_provider /usr/lib64/galera4/libgalera_smm.so
wsrep_provider_options allocator.disk_pages_encryption = no; allocator.encryption_cache_page_size = 32K; allocator.encryption_cache_size = 16777216; base_dir = /mys01/mysql/; base_host = 10.33.19.24; base_port = 4567; cert.log_conflicts = no; cert.optimistic_pa = no; debug = no; evs.auto_evict = 0; evs.causal_keepalive_period = PT1S; evs.debug_log_mask = 0x1; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.info_log_mask = 0; evs.install_timeout = PT7.5S; evs.join_retrans_period = PT1S; evs.keepalive_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 10; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.use_aggregate = true; evs.user_send_window = 4; evs.version = 1; evs.view_forget_timeout = P1D; gcache.dir = /mys03/mysql; gcache.encryption = no; gcache.encryption_cache_page_size = 32K; gcache.encryption_cache_size = 16777216; gcache.freeze_purge_at_seqno = -1; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem
wsrep_recover OFF
wsrep_reject_queries NONE
wsrep_replicate_myisam OFF
wsrep_restart_replica OFF
wsrep_restart_slave OFF
wsrep_retry_autocommit 1
wsrep_RSU_commit_timeout 5000
wsrep_slave_FK_checks ON
wsrep_slave_threads 8
wsrep_slave_UK_checks OFF
wsrep_SR_store table
wsrep_sst_allowed_methods xtrabackup-v2
wsrep_sst_donor
wsrep_sst_donor_rejects_queries OFF
wsrep_sst_method xtrabackup-v2
wsrep_sst_receive_address AUTO
wsrep_start_position 952e6cc0-8291-11ee-87dd-c6c2084194ba:41755408
wsrep_sync_wait 0
wsrep_trx_fragment_size 0
wsrep_trx_fragment_unit bytes
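For context on the dump above: wsrep_OSU_method is TOI, so DDL such as ALTER TABLE or an index build is replicated in Total Order Isolation and executes on all nodes at once, blocking the whole cluster for its duration. A minimal sketch of the per-session RSU alternative (standard Galera syntax; RSU applies the DDL locally only, so it has to be repeated on each node in turn, and the table/column names are the same placeholders as above):

SET SESSION wsrep_OSU_method = 'RSU';
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);
SET SESSION wsrep_OSU_method = 'TOI';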

Hello @kurole,
Just to confirm, is this using Percona Everest? Also, the information you provided above does not help diagnose the issue. You said the nodes crash; do you have crash logs? Core dumps? We will need those to help.
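For example, on RHEL 9 something like the following should surface that evidence (the error-log path is an assumption based on the default location, so adjust it to your my.cnf; coredumpctl requires systemd-coredump to be installed):

grep -iE 'assert|signal|inconsisten' /var/log/mysqld.log
coredumpctl list mysqld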

Hi, I checked:
Percona XtraDB Cluster (GPL)
I will provide the other logs.

Error in the console:

[ERROR] [MY-000000] [Galera] Inconsistency detected: Inconsistent by consensus on 952e6cc0-8291-11ee-87dd-c6c
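For context, this message means the node's locally applied state diverged from the cluster-wide consensus, so Galera aborts the node and it must rejoin via state transfer. Before restarting, the recovered position can be checked in grastate.dat (path taken from wsrep_data_home_dir in the dump above); a seqno of -1 there means the node will request a full SST on restart:

cat /mys01/mysql/grastate.dat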