Recently a Percona Server instance crashed on my end. I followed the article below to gather some information:
[url]https://www.percona.com/blog/2015/08/17/mysql-is-crashing-a-support-engineers-point-of-view/[/url]
Here is what I found in the error log:
2017-08-03 12:58:03 7f1022381700 InnoDB: Error: Write to file ./ib_logfile1 failed at offset 52673024.
InnoDB: 512 bytes should have been written, only 0 were written.
InnoDB: Operating system error number 12.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 12 means 'Cannot allocate memory'.
InnoDB: Some operating system error numbers are described at
InnoDB: [url]http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html[/url]
2017-08-03 12:58:03 7f1022381700 InnoDB: Assertion failure in thread 139707270305536 in file fil0fil.cc line 5863
InnoDB: Failing assertion: ret
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: [url]http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html[/url]
InnoDB: about forcing recovery.
12:58:03 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
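For reference, operating system error number 12 can be decoded with the perror utility that ships with MySQL/Percona Server (shown here as a generic example, not output captured from this host):

perror 12
# OS error code  12:  Cannot allocate memory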
root@servername [/usr/src/debug]# resolve_stack_dump -s /root/mysqld.sym -n /root//stack
stack_bottom = 0 thread_stack 0x40000
0x8d90fc my_init_stacktrace + 12
0x65ac01 handle_fatal_signal + 577
0x7f1057d487e0 _end + 1453210488
0x7f1055f76495 _end + 1421940781
0x7f1055f77c75 _end + 1421946893
0xa8e85d _Z24fil_space_get_first_pathm + 125
0x946747 _Z15log_io_completeP11log_group_t + 7
0x946ec3 _Z15log_io_completeP11log_group_t + 1923
0x94867b _Z15log_write_up_tommm + 43
0x9d07f4 _Z19purge_archived_logslm + 1284
0x7f1057d40aa1 _end + 1453178425
0x7f105602cbcd _end + 1422688101
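For completeness, the symbols file used above was produced roughly as the MySQL manual describes (a sketch; it assumes an unstripped mysqld binary at /usr/sbin/mysqld):

# dump the symbol table, numerically sorted and demangled, for resolve_stack_dump
nm -n --demangle /usr/sbin/mysqld > /root/mysqld.sym
# /root/stack contains the raw addresses copied from the error log
resolve_stack_dump -s /root/mysqld.sym -n /root/stack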
I do not have a core dump. The server hosting this MySQL instance is a Linux container, and according to some logs the container ran out of memory. There is no swap in the container. The process was not killed by the OOM killer because we have configured the following for the mysqld processes:
root@servername [/usr/src/debug]# cat /proc/55067/oom_*
-17
0
-1000
root@servername [/usr/src/debug]# cat /proc/11054/oom_*
-17
0
-1000
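In glob order those files are oom_adj, oom_score and oom_score_adj; -17 (the legacy OOM_DISABLE value) and -1000 both tell the kernel's OOM killer never to select the process. A minimal sketch of how we apply the setting (55067 being the mysqld PID shown above):

# oom_score_adj = -1000 excludes the process from OOM-killer selection;
# the kernel mirrors this into the legacy oom_adj file as -17
echo -1000 > /proc/55067/oom_score_adj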
From the error log it looks like InnoDB aborted deliberately after the write to ib_logfile1 failed with ENOMEM, rather than being killed by the kernel. Still, I would have expected that when memory runs out MySQL would slow down rather than crash. Can you please help us investigate further and determine whether this is a MySQL/Percona Server bug or a problem on our end?
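In case it is useful, these are the checks I am running on the container to confirm the memory pressure (a sketch, assuming cgroup v1; the exact paths may differ with other container setups):

# kernel messages around the crash time
dmesg | grep -iE 'out of memory|oom|page allocation failure'
# cgroup v1 memory limit, peak usage, and number of times the limit was hit
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes
cat /sys/fs/cgroup/memory/memory.failcnt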