MySQL hangs and eventually crashes


We have a 5.6 master and a 5.7 slave.
The 5.7 slave hangs after some time and crashes a while later.
We see this message in the slave's error log:

OS WAIT ARRAY INFO: reservation count 113540
--Thread 140292566030080 has waited at line 3023 for 747  seconds the semaphore:
X-lock on RW-latch at 0x7f999982e3b0 created in file line 1450
a writer (thread id 140291617060608) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time read locked in file not yet reserved line 0
Last time write locked in file /mnt/workspace/percona-server-5.7-redhat-binary-rocks-new/label_exp/min-centos-6-x64/test/rpmbuild/BUILD/percona-server-5.7.35-38/percona-server-5.7.35-38/storage/innobase/buf/ line 1232
OS WAIT ARRAY INFO: signal count 164550
RW-shared spins 0, rounds 162983, OS waits 61922
RW-excl spins 0, rounds 39493, OS waits 2030
RW-sx spins 28350, rounds 96471, OS waits 324
Spin rounds per wait: 162983.00 RW-shared, 39493.00 RW-excl, 3.40 RW-sx
Trx id counter 59793604147
Purge done for trx's n:o < 59793604147 undo n:o < 0 state: running but idle
History list length 1
---TRANSACTION 421804438000080, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421804437997824, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 59793604098, ACTIVE 748 sec inserting
mysql tables in use 1, locked 1
4 lock struct(s), heap size 1136, 0 row lock(s), undo log entries 961
MySQL thread id 5, OS thread handle 140292566030080, query id 833655 System lock

What is the best way to find out what is wrong here? Any idea how I can track down the cause?



Are you using ROW-based replication? Are you also writing to the replica? Do you have foreign keys on the tables in use? What does ‘SHOW PROCESSLIST’ show on the replica while it is hanging? Does replication lag increase? Are you using parallel replication?
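To check those points on the replica while it is hanging, a few standard MySQL 5.7 diagnostic statements could be run (a sketch only; interpret the output in the context of your workload):

```sql
-- Look for long-running threads stuck in "System lock" or similar states
SHOW PROCESSLIST;

-- Check Seconds_Behind_Master and the SQL/IO thread states for growing lag
SHOW SLAVE STATUS\G

-- Full semaphore and transaction sections (the source of the log excerpt above)
SHOW ENGINE INNODB STATUS\G

-- A value greater than 0 means parallel (multi-threaded) replication is enabled
SELECT @@slave_parallel_workers;
```

Comparing repeated `SHOW ENGINE INNODB STATUS` snapshots a few seconds apart shows whether the same thread is still stuck on the same RW-latch, which distinguishes a genuine hang from slow progress.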


Hello Matthew,

Thanks for your reply, sorry for the late answer.
We started seeing these issues after moving to a new VMware environment; we moved back to the old environment and the issues are gone.
So we will investigate what is wrong with the new VMware environment, since that must be the cause.
Thanks anyway for trying to help!
