XtraDB Cluster goes down while changing an expired password from the Navicat 12 client

Hi all,

I’ve installed Percona XtraDB Cluster 5.7 on 3 nodes. The OS is CentOS 7.6.
When I connect as a user whose password has expired using the Navicat 12 client, it asks me to change my password. When I change the password, all nodes crash and the error below occurs. (With Navicat 11, there is no problem.)
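For reference, the scenario can be set up from the command line like this (a minimal sketch; ‘testuser’ and both passwords are placeholders, not my real accounts):

# on any node: create a test user, then expire its password
mysql -u root -p -e "CREATE USER 'testuser'@'%' IDENTIFIED BY 'OldPass123'; ALTER USER 'testuser'@'%' PASSWORD EXPIRE;"

# connecting as that user puts the session in sandbox mode until the
# password is reset, e.g. with: SET PASSWORD = 'NewPass456';
mysql -u testuser -p'OldPass123' -h node1ip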

04:51:21 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://jira.percona.com/projects/PXC/issues

key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=2
max_threads=10001
thread_count=19
connection_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 4118039 K bytes of memory
Hope that’s ok; if not, decrease some variables in the equation.
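(A side note on the figure above, not the cause of the crash: working the printed formula backwards with the values it reports,

(4118039 K x 1024 - 134217728) / 10001 ≈ 408,225 bytes of read + sort buffer per thread

so nearly all of the ~4 GB bound comes from max_threads = 10001, which typically corresponds to max_connections = 10000.)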

Thread pointer: 0x7fb33c0008c0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 7fb3845f2950 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xf36d5b]
/usr/sbin/mysqld(handle_fatal_signal+0x471)[0x7ad461]
/lib64/libpthread.so.0(+0xf5d0)[0x7fb6972615d0]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THDb+0x4a80)[0xcf3e50]
/usr/sbin/mysqld(_Z11mysql_parseP3THDP12Parser_state+0x655)[0xcf7bf5]
/usr/sbin/mysqld(_ZN15Query_log_event14do_apply_eventEPK14Relay_log_infoPKcm+0xaff)[0xea357f]
/usr/sbin/mysqld(_ZN9Log_event11apply_eventEP14Relay_log_info+0x6d)[0xea158d]
/usr/sbin/mysqld(_Z14wsrep_apply_cbPvPKvmjPK14wsrep_trx_meta+0x65a)[0x7c7b3a]
/usr/lib64/galera3/libgalera_smm.so(_ZNK6galera9TrxHandle5applyEPvPF15wsrep_cb_statusS1_PKvmjPK14wsrep_trx_metaERS6_+0xfd)[0x7fb68ad1c0ed]
/usr/lib64/galera3/libgalera_smm.so(+0x200af3)[0x7fb68ad5baf3]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera13ReplicatorSMM9apply_trxEPvPNS_9TrxHandleE+0x199)[0x7fb68ad5e719]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera13ReplicatorSMM11process_trxEPvPNS_9TrxHandleE+0x14e)[0x7fb68ad6182e]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x1d0)[0x7fb68ad3bca0]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0x66)[0x7fb68ad3d406]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x8b)[0x7fb68ad6206b]
/usr/lib64/galera3/libgalera_smm.so(galera_recv+0x2c)[0x7fb68ad7432c]
/usr/sbin/mysqld[0x7c900a]
/usr/sbin/mysqld(start_wsrep_THD+0x222)[0x79e182]
/usr/sbin/mysqld(pfs_spawn_thread+0x1b4)[0xf4f574]
/lib64/libpthread.so.0(+0x7dd5)[0x7fb697259dd5]
/lib64/libc.so.6(clone+0x6d)[0x7fb695429ead]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fb33c0132d9): is an invalid pointer
Connection ID (thread ID): 12
Status: NOT_KILLED

Thanks

Hello munkhzul, may I check with you whether this is a development, test, or production system, please?
Also, the team often asks me to get a copy of the my.cnf from each node, plus any error logs that have been written; if you could gather those together, it may help in case the answer is not straightforward.
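For reference, on a stock CentOS install of PXC 5.7 those usually live here (paths are assumptions; adjust to your layout):

# configuration files (the main file may include a conf.d directory)
cat /etc/my.cnf /etc/percona-xtradb-cluster.conf.d/*.cnf

# error log; the real path is whatever log_error points at
grep -r log_error /etc/my.cnf /etc/percona-xtradb-cluster.conf.d/ 2>/dev/null
tail -n 200 /var/log/mysqld.log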
Thanks!

Hi,

Please see the config files and error log below.

----------- my.cnf on node1 -------

# The Percona XtraDB Cluster 5.7 configuration file.

[mysql]

# CLIENT
port = 3306
socket = /var/lib/mysql/mysql.sock

[mysqld]
server-id = 1

# START PERCONA XTRADB CLUSTER
binlog_format = ROW
wsrep_cluster_name = test_cluster_new
wsrep_node_name = test1
wsrep_cluster_address = gcomm://node1ip,node2ip,node3ip
wsrep_node_address = node1ip
wsrep_provider = /usr/lib64/galera3/libgalera_smm.so
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sstuser:XXXXX
wsrep_sst_donor = test3
innodb_autoinc_lock_mode = 2
wsrep_slave_threads = 16
wsrep_provider_options = "gcache.size=10G; gcache.page_size=5G"
pxc_strict_mode = DISABLED
# END CLUSTER

# GENERAL

----------- my.cnf on node2 -------

server-id = 2

# START PERCONA XTRADB CLUSTER
binlog_format = ROW
wsrep_cluster_name = test_cluster_new
wsrep_node_name = test2
wsrep_cluster_address = gcomm://node1ip,node2ip,node3ip
wsrep_node_address = node2ip
wsrep_provider = /usr/lib64/galera3/libgalera_smm.so
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sstuser:XXXXX
wsrep_sst_donor = test3
innodb_autoinc_lock_mode = 2
wsrep_slave_threads = 16
wsrep_provider_options = "gcache.size=10G; gcache.page_size=5G"
pxc_strict_mode = DISABLED
# END CLUSTER

# GENERAL

----------- my.cnf on node3 -------

server-id = 3

# START PERCONA XTRADB CLUSTER
binlog_format = ROW
wsrep_cluster_name = test_cluster_new
wsrep_node_name = test3
wsrep_cluster_address = gcomm://node1ip,node2ip,node3ip
wsrep_node_address = node3ip
wsrep_provider = /usr/lib64/galera3/libgalera_smm.so
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sstuser:XXXXX
wsrep_sst_donor = test3
innodb_autoinc_lock_mode = 2
wsrep_slave_threads = 16
wsrep_provider_options = "gcache.size=10G; gcache.page_size=5G"
pxc_strict_mode = DISABLED
# END CLUSTER

# GENERAL
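As an aside, a quick way to confirm that all three nodes actually formed one cluster (a sketch; run from any node):

# a healthy 3-node cluster reports wsrep_cluster_size = 3
# and wsrep_local_state_comment = Synced
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'; SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"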

---------------------------------- Error log --------------------------

Error log on node2 and node3:

2019-11-04T23:36:44.163951-05:00 0 [Note] WSREP: (69292311, 'tcp://0.0.0.0:4567') turning message relay requesting off
04:51:22 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://jira.percona.com/projects/PXC/issues

key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=1
max_threads=10001
thread_count=3
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 4118039 K bytes of memory
Hope that’s ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f97500008c0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 7f97e8be6950 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xf36d5b]
/usr/sbin/mysqld(handle_fatal_signal+0x471)[0x7ad461]
/lib64/libpthread.so.0(+0xf5d0)[0x7f9a891f65d0]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THDb+0x4a80)[0xcf3e50]
/usr/sbin/mysqld(_Z11mysql_parseP3THDP12Parser_state+0x655)[0xcf7bf5]
/usr/sbin/mysqld(_ZN15Query_log_event14do_apply_eventEPK14Relay_log_infoPKcm+0xaff)[0xea357f]
/usr/sbin/mysqld(_ZN9Log_event11apply_eventEP14Relay_log_info+0x6d)[0xea158d]
/usr/sbin/mysqld(_Z14wsrep_apply_cbPvPKvmjPK14wsrep_trx_meta+0x65a)[0x7c7b3a]
/usr/lib64/galera3/libgalera_smm.so(_ZNK6galera9TrxHandle5applyEPvPF15wsrep_cb_statusS1_PKvmjPK14wsrep_trx_metaERS6_+0xfd)[0x7f9a7ccb10ed]
/usr/lib64/galera3/libgalera_smm.so(+0x200af3)[0x7f9a7ccf0af3]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera13ReplicatorSMM9apply_trxEPvPNS_9TrxHandleE+0x199)[0x7f9a7ccf3719]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera13ReplicatorSMM11process_trxEPvPNS_9TrxHandleE+0x14e)[0x7f9a7ccf682e]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x1d0)[0x7f9a7ccd0ca0]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0x66)[0x7f9a7ccd2406]
/usr/lib64/galera3/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x8b)[0x7f9a7ccf706b]
/usr/lib64/galera3/libgalera_smm.so(galera_recv+0x2c)[0x7f9a7cd0932c]
/usr/sbin/mysqld[0x7c900a]
/usr/sbin/mysqld(start_wsrep_THD+0x222)[0x79e182]
/usr/sbin/mysqld(pfs_spawn_thread+0x1b4)[0xf4f574]
/lib64/libpthread.so.0(+0x7dd5)[0x7f9a891eedd5]
/lib64/libc.so.6(clone+0x6d)[0x7f9a873beead]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7f97500132d9): is an invalid pointer
Connection ID (thread ID): 4
Status: NOT_KILLED

You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.

Hello again, munkhzul. It’s possible that you have stumbled on an issue that we are not aware of.
How would you feel about raising a bug report at jira.percona.com?
For this we would need some more details in addition to the information you have provided above:
- specific version of Percona XtraDB Cluster (5.7 is not quite enough)
- specific info on environment
- if possible, a reproducible test case

If you could do that it would be greatly appreciated! (A sketch of commands for gathering the first two follows below.)
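For the first two items, the exact strings can be pulled like this (a sketch):

# server version and Galera provider version
mysql -u root -p -e "SELECT VERSION(); SHOW GLOBAL STATUS LIKE 'wsrep_provider_version';"

# OS release and kernel
cat /etc/redhat-release
uname -a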

Hi, thank you for the reply.

- Specific version of Percona XtraDB Cluster: 5.7.23-23-57-log Percona XtraDB Cluster (GPL), Release rel23, Revision f5578f0, WSREP version 31.31, wsrep_31.31
- OS info: CentOS Linux release 7.6.1810 (Core), Linux test1 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Reproducible test case: When I connect as a user whose password has expired using the Navicat 12 client, it asks me to change my password. When I change the password, all nodes crash. With Navicat 11, there is no problem.
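A command-line version of the same test case, for the bug report (a sketch; ‘testuser’ and the passwords are placeholders):

# --connect-expired-password lets the non-interactive client log in
# with an expired password and issue the reset statement
mysql --connect-expired-password -u testuser -p'OldPass123' -h node1ip -e "SET PASSWORD = 'NewPass456';"

Whether this crashes the cluster the same way as Navicat 12 does would be worth noting in the report.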