I'm running a backup on a node of a Percona XtraDB Cluster using innobackupex v2.2.8. The backup fails with the following:
150212 15:49:05 innobackupex: Finished backing up non-InnoDB tables and files
150212 15:49:05 innobackupex: Executing LOCK BINLOG FOR BACKUP...
DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr/bin/innobackupex line 3036.
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 3039
	main::mysql_query('HASH(0x1adbda0)', 'LOCK BINLOG FOR BACKUP') called at /usr/bin/innobackupex line 3501
	main::mysql_lock_binlog('HASH(0x1adbda0)') called at /usr/bin/innobackupex line 2000
	main::backup() called at /usr/bin/innobackupex line 1592
innobackupex: Error: Error executing 'LOCK BINLOG FOR BACKUP': DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr/bin/innobackupex line 3036.
150212 15:49:05 innobackupex: Waiting for ibbackup (pid=29318) to finish
DB Backup ending at Thu 12 Feb 2015 03:49:05 PM MST
At the same time, the mysqld process logs the following and exits:
22:49:05 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=25165824
read_buffer_size=131072
max_used_connections=2
max_threads=202
thread_count=3
connection_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 105204 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x16598640
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7f28bc0e5d38 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x8f97d5]
/usr/sbin/mysqld(handle_fatal_signal+0x4b4)[0x6655c4]
/lib64/libpthread.so.0[0x396ec0f710]
[0x7f25200000a8]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 14411
Status: KILL_CONNECTION

You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
This doesn't happen consistently.
I would expect the backup to simply fail without taking the mysql server down with it. Alternately, the mysqld process may have crashed first and that caused the backup to fail, but in that case innobackupex shouldn't be the thing triggering the server crash.
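One complication in untangling the order of events: the innobackupex log is timestamped in MST and the mysqld crash log in UTC, and they refer to the same second, so the timestamps alone can't tell us which happened first. A quick sanity check of the timezone arithmetic with GNU date(1) (this is just my own verification, not part of either log):

```shell
# MST is UTC-7, so 15:49:05 MST on 2015-02-12 should be 22:49:05 UTC,
# i.e. the exact second the mysqld crash log starts with.
TZ=UTC date -d '2015-02-12 15:49:05 MST' '+%H:%M:%S'   # prints 22:49:05
```

So both the LOCK BINLOG FOR BACKUP failure and the signal 11 land within the same second, which is consistent with either ordering.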
Any ideas on causality here?