2025-09-11T09:00:50Z UTC - mysqld got signal 11 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
BuildID[sha1]=ae1efca2c0800a60ab995705a17c005525d455fc
Thread pointer: 0x7fa07afb09d0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 7fa2e457ebd0 thread_stack 0x100000
/usr/local/mysql/bin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x3d) [0x21ccb8d]
/usr/local/mysql/bin/mysqld(print_fatal_signal(int)+0x393) [0x1035bf3]
/usr/local/mysql/bin/mysqld(handle_fatal_signal+0xa5) [0x1035ca5]
/lib64/libpthread.so.0(+0x12990) [0x7fa451d9b990]
/usr/local/mysql/bin/mysqld(wsrep_handle_mdl_conflict(MDL_context const*, MDL_ticket*)+0x91) [0x10559c1]
/usr/local/mysql/bin/mysqld(MDL_lock::can_grant_lock(enum_mdl_type, MDL_context const*) const+0x4fb) [0x135c84b]
/usr/local/mysql/bin/mysqld(MDL_context::try_acquire_lock_impl(MDL_request*, MDL_ticket**)+0x6ba) [0x136020a]
/usr/local/mysql/bin/mysqld(MDL_context::acquire_lock(MDL_request*, unsigned long)+0xa6) [0x1360676]
/usr/local/mysql/bin/mysqld(MDL_context::acquire_locks(I_P_List<MDL_request, I_P_List_adapter<MDL_request, &MDL_request::next_in_list, &MDL_request::prev_in_list>, I_P_List_counter, I_P_List_no_push_back<MDL_request> >, unsigned long)+0x2b2) [0x13618d2]
/usr/local/mysql/bin/mysqld() [0xf5c345]
/usr/local/mysql/bin/mysqld(mysql_alter_table(THD, char const*, char const*, HA_CREATE_INFO*, Table_ref*, Alter_info*)+0x5326) [0xf7bd16]
/usr/local/mysql/bin/mysqld(Sql_cmd_alter_table::execute(THD*)+0x6cb) [0x13f922b]
/usr/local/mysql/bin/mysqld(mysql_execute_command(THD*, bool)+0x1529) [0xec0379]
/usr/local/mysql/bin/mysqld(dispatch_sql_command(THD*, Parser_state*)+0x520) [0xec52d0]
/usr/local/mysql/bin/mysqld() [0xec55bb]
/usr/local/mysql/bin/mysqld(dispatch_command(THD*, COM_DATA const*, enum_server_command)+0x1b92) [0xec85a2]
/usr/local/mysql/bin/mysqld(do_command(THD*)+0x2e3) [0xecb4c3]
/usr/local/mysql/bin/mysqld() [0x1025038]
/usr/local/mysql/bin/mysqld() [0x28873a5]
/lib64/libpthread.so.0(+0x81ca) [0x7fa451d911ca]
/lib64/libc.so.6(clone+0x43) [0x7fa4500fd8d3]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fa0104727f0): is an invalid pointer
Connection ID (thread ID): 1380845
Status: NOT_KILLED
version mysql 8.0.41
version galera 4.23
For the detailed error log, please refer to error3306; for the system log, please refer to messages; and for the configuration file, please refer to my.txt. All these files have been uploaded in the reply post I made below.
This issue appears to be related to rsync during state transfer. Have you verified end-to-end network connectivity between this node and the other cluster members (e.g., checking TCP reachability, firewall rules, and MTU consistency)?
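A rough way to check this from the affected node (assuming Linux tools and a placeholder peer address 192.0.2.10; substitute each cluster member's IP, and adjust the ports if you have changed the Galera defaults):

# TCP reachability: 3306 (client), 4567 (group replication), 4568 (IST), 4444 (SST)
for port in 3306 4567 4568 4444; do nc -zv 192.0.2.10 $port; done
# MTU consistency: send non-fragmentable pings sized for a 1500-byte path (1472 + 28 bytes of headers)
ping -c 3 -M do -s 1472 192.0.2.10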
Another error I see is:
2025-09-08T13:27:39.851025+08:00 15 [ERROR] [MY-013117] [Repl] Replica I/O for channel '': Fatal error: The replica I/O thread stops because source and replica have equal MySQL server ids; these ids must be different for replication to work (or the --replicate-same-server-id option must be used on replica but this does not always make sense; please check the manual before using it). Error_code: MY-013117
It appears that this node has an asynchronous replica attached, but both the primary node and the replica are configured with the same server_id. This configuration conflict will prevent replication from functioning correctly.
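For example, you can compare the ids on both sides and give the replica a distinct one (the value 102 below is only a placeholder):

-- Run on both the source and the replica; the two server_id values must differ
SELECT @@server_id, @@server_uuid;
-- On the replica, pick a unique id; SET PERSIST (MySQL 8.0) also writes it to mysqld-auto.cnf
SET PERSIST server_id = 102;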
Additionally, the Signal 11 (segmentation fault) indicates that the MySQL process crashed unexpectedly. After the crash, did you attempt to restart the node? If so, was it able to rejoin the cluster and establish connectivity?
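If and when you do restart it, a quick way to confirm the node has rejoined is to check the wsrep status variables, e.g.:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';          -- expected node count
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';        -- should be 'Primary'
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';   -- should be 'Synced'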
@ocean,
Switch to xtrabackup-v2 as the SST method. The rsync method is quite old and can have problems with larger data sets; the xtrabackup-based method is actively maintained and, in recent versions, handles multi-TB datasets without issue.
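A minimal my.cnf sketch of that change (it assumes the xtrabackup SST script and Percona XtraBackup are installed on every node):

[mysqld]
wsrep_sst_method = xtrabackup-v2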
Thank you very much for your reply. We have two Galera clusters, and the node with the issue belongs to a cluster that once served as the slave database of the other cluster. A switchover was performed on September 8th, and after the switchover, the cluster was used normally until September 11th.
After the outage, the node has not yet been restarted. Because SST is configured to use rsync, restarting it would trigger a full data copy, which we need to schedule during a low-traffic business window. In addition, the root cause of the outage has not yet been identified.
Thank you for taking the time to assist us. We plan to switch the SST method from rsync to xtrabackup-v2 in the future.
The only difference between this faulty node and the other two normally functioning nodes is that we perform a daily logical backup on this node using mysqldump. I have a suspicion: could this logical backup be the cause of the database failure?
Additionally, I noticed quite a few metadata-lock issues around the time of the failure, so I also wonder whether it was caused by DDL operations. After checking the logs, I confirmed that there were indeed a large number of DDL operations; however, similar DDL statements have been running for a long time.
I still have one question. From the logs, the server appears to have still been running at 17:00 on September 11th, yet the "mysqld got signal 11" message is timestamped 09:00 that morning (UTC).
Unlikely. However, you should stop using mysqldump and switch to mydumper: because mydumper is multi-threaded, it will typically complete your logical backup in half the time of mysqldump, if not less.
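A typical invocation looks roughly like this (host, credentials, and output path are placeholders):

mydumper --host 127.0.0.1 --user backup --password '...' \
  --threads 4 --compress --outputdir /backups/logical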
If you are running a lot of DDL while also taking a backup, that combination could have been an issue.
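If you want to see whether DDL and the backup are contending for metadata locks, you can look for pending MDL requests in performance_schema (the mdl instrument is enabled by default in MySQL 8.0), for example:

SELECT object_type, object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE lock_status = 'PENDING';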
Thank you for your reply. I will follow your suggestion: first stop the backup, then ask the R&D team to reduce DDL operations, observe for a while, and report back with the results. Thank you again for your help.