mysqld dies with semaphore wait

Dear Sirs and Ma’ams,

Thank you for your work on PMM. I am using it in AWS to monitor RDS instances, and it has mostly been working great, except in our production environment. There we are running PMM 1.17 in a Docker container on a t2.large instance with a gp2 EBS volume, monitoring 8 RDS instances. Everything starts out fine, but a few days or so after the container starts, mysqld dies inside it. At that point the graphs still work, but Query Analytics stops. Please find the output from mysql.log attached below.
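
For what it's worth, here is roughly how I confirm that mysqld has died inside the container (just a sketch; it assumes the container is named pmm-server):

    docker top pmm-server | grep mysqld

Once Query Analytics stops, that command returns nothing, even though the container itself is still running.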

Thanks,

Jerm

InnoDB: ###### Diagnostic info printed to the standard error stream
InnoDB: Error: semaphore wait has lasted > 600 seconds
InnoDB: We intentionally crash the server, because it appears to be hung.
190304 5:49:54 InnoDB: Assertion failure in thread 140219024246528 in file srv0srv.c line 2980
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
05:49:54 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at https://jira.percona.com

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=61
max_threads=514
thread_count=58
connection_count=58
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1133032 K bytes of memory
Hope that’s ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0x7debfb]
/usr/sbin/mysqld(handle_fatal_signal+0x4b1)[0x6a5471]
/lib64/libpthread.so.0(+0xf6d0)[0x7f875f9956d0]
/lib64/libc.so.6(gsignal+0x37)[0x7f875e0c0277]
/lib64/libc.so.6(abort+0x148)[0x7f875e0c1968]
/usr/sbin/mysqld[0x83afd5]
/lib64/libpthread.so.0(+0x7e25)[0x7f875f98de25]
/lib64/libc.so.6(clone+0x6d)[0x7f875e188bad]
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.

Hi Jerm,

It appears you have a crashing MySQL instance. You can try to take action to save the installation; for that advice I would encourage you to head over to the Percona forum dedicated to support of Percona Server for MySQL: https://www.percona.com/forums/questions-discussions/mysql-and-percona-server
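
As a first step toward saving the installation, you may be able to get Query Analytics back by restarting the MySQL service inside the container rather than recreating it. A rough sketch, assuming the container is named pmm-server and that PMM 1.x manages its internal services with supervisord (check the exact service name that the status output reports before restarting):

    docker exec -it pmm-server supervisorctl status
    docker exec -it pmm-server supervisorctl restart mysql

If the crashes stem from tablespace corruption, as the log suggests is possible, a restart is only a temporary fix, so please do continue with the forum post.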

If you post this as a case about Percona Server crashing, we'll get more eyes on the issue and can pull together the actions needed to eliminate the crashing condition.
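
It will also help to gather the container-side details before posting. A minimal sketch of the sort of thing to include, assuming the default container name pmm-server and that the log lives at /var/log/mysql.log inside the container, as the filename of your attachment suggests (the path can vary between PMM 1.x releases):

    docker inspect --format '{{.Config.Image}}' pmm-server
    docker logs --tail 200 pmm-server
    docker exec pmm-server tail -n 500 /var/log/mysql.log

The exact image tag plus the full run of log lines around the assertion failure will make the case much easier to act on.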

I suggest that, as part of your post, you share the System Overview dashboard for a period spanning the crashes, using the pmm-server host identifier. Here's an example of such a shared Grafana dashboard:
Grafana

I hope this helps.