Nodes terminated when adding a new node

Hello!

I have a cluster of 2 nodes (master-master), and one of them is also the master for a slave.
When I tried to add a new master node to the cluster, the donor node and the joining node both went down. On the donor node, innobackup.backup.log shows the following:
150121 14:49:11 innobackupex: Finished backing up non-InnoDB tables and files

150121 14:49:11 innobackupex: Executing LOCK BINLOG FOR BACKUP...
DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr//bin/innobackupex line 3036.
innobackupex: got a fatal error with the following stacktrace: at /usr//bin/innobackupex line 3039
main::mysql_query('HASH(0x10a9720)', 'LOCK BINLOG FOR BACKUP') called at /usr//bin/innobackupex line 3501
main::mysql_lock_binlog('HASH(0x10a9720)') called at /usr//bin/innobackupex line 2000
main::backup() called at /usr//bin/innobackupex line 1592
innobackupex: Error:
Error executing 'LOCK BINLOG FOR BACKUP': DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr//bin/innobackupex line 3036.
150121 14:49:11 innobackupex: Waiting for ibbackup (pid=44712) to finish

Versions:
mysqld Ver 5.6.21-70.1-56 for Linux on x86_64 (Percona XtraDB Cluster (GPL), Release rel70.1, Revision 938, WSREP version 25.8, wsrep_25.8.r4150)

xtrabackup version 2.2.8 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )

I tried to set up the new node again and got the same error.

You may be hitting this bug: [url]https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1401133[/url]
Did mysqld also crash, as mentioned in the bug report?
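
A quick way to check on the donor is to grep the MySQL error log for the signal 11 signature (the log path below is just an example; use whatever your log_error points to):

# check the donor's error log for the crash described in the bug report
grep -n 'got signal 11' /var/log/mysqld.log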

I have been getting the same issue since March 11, when I installed the new Percona packages:
Mar 11 10:51:32 Updated: Percona-XtraDB-Cluster-galera-3-3.9-1.3494.rhel6.x86_64
Mar 11 10:51:33 Updated: 1:Percona-XtraDB-Cluster-client-56-5.6.22-25.8.978.el6.x86_64
Mar 11 10:51:35 Updated: percona-xtrabackup-2.2.9-5067.el6.x86_64
Mar 11 10:51:35 Updated: 1:Percona-XtraDB-Cluster-shared-56-5.6.22-25.8.978.el6.x86_64
Mar 11 10:52:14 Updated: 1:Percona-XtraDB-Cluster-server-56-5.6.22-25.8.978.el6.x86_64
Mar 11 10:52:14 Updated: 1:Percona-XtraDB-Cluster-56-5.6.22-25.8.978.el6.x86_64
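
For reference, the currently installed Percona packages can be listed on RHEL/CentOS 6 with a read-only query like this:

# list installed Percona XtraDB Cluster and xtrabackup packages
rpm -qa | grep -iE 'percona|xtrabackup'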

My backup crashes every night at the same point.

MySQL error log:

2015-03-23 16:06:47 3443 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 122394644)
2015-03-23 16:06:47 3443 [Note] WSREP: Synchronized with group, ready for connections
2015-03-23 16:06:50 3443 [Note] WSREP: (045575ff, 'tcp://0.0.0.0:4567') turning message relay requesting off
03:50:04 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona XtraDB Cluster better by reporting any
bugs at https://bugs.launchpad.net/percona-xtradb-cluster

key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=634
max_threads=2050
thread_count=18
connection_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 826591 K bytes of memory
Hope that’s ok; if not, decrease some variables in the equation.

Thread pointer: 0x65f8e9f0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 7f00a16b8d38 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x8fa965]
/usr/sbin/mysqld(handle_fatal_signal+0x4b4)[0x665644]
/lib64/libpthread.so.0(+0xf710)[0x7f1812ff0710]
[0x7eff580000c8]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 240147
Status: KILL_CONNECTION

You may download the Percona XtraDB Cluster operations manual by visiting
http://www.percona.com/software/percona-xtradb-cluster/. You may find information
in the manual which will help you identify the cause of the crash.
150326 04:50:06 mysqld_safe Number of processes running now: 0
150326 04:50:06 mysqld_safe WSREP: not restarting wsrep node automatically
150326 04:50:06 mysqld_safe mysqld from pid file /var/lib/mysql/XXXXXXXXXXXXXXX.pid ended

MySQL backup log:

log scanned up to (419166230342)
log scanned up to (419166242232)
[04] …done
xtrabackup: Creating suspend file 'XXXXXXXXXXXXXX/xtrabackup/tempbackup/xtrabackup_suspended_2' with pid '834130'
log scanned up to (419166256114)

150326 04:49:56 innobackupex: Continuing after ibbackup has suspended
150326 04:49:56 innobackupex: Executing LOCK TABLES FOR BACKUP...
150326 04:49:56 innobackupex: Backup tables lock acquired

150326 04:49:56 innobackupex: Starting to backup non-InnoDB tables and files
innobackupex: in subdirectories of '/var/lib/mysql/'
innobackupex: Backing up files '/var/lib/mysql//mysql/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (74 files)

log scanned up to (419166337626)
log scanned up to (419166341849)
innobackupex: Backing up files '/var/lib/mysql//XXXXXXX/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (69 files)
log scanned up to (419166341849)
log scanned up to (419166341849)
log scanned up to (419166341849)
log scanned up to (419166341849)
innobackupex: Backing up file '/var/lib/mysql//wsrep/db.opt'
innobackupex: Backing up file '/var/lib/mysql//wsrep/membership.frm'
innobackupex: Backing up file '/var/lib/mysql//wsrep/status.frm'
innobackupex: Backing up files '/var/lib/mysql//XXXXXXX/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (119 files)
log scanned up to (419166341849)
log scanned up to (419166341849)
innobackupex: Backing up files '/var/lib/mysql//performance_schema/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (53 files)
150326 04:50:04 innobackupex: Finished backing up non-InnoDB tables and files

150326 04:50:04 innobackupex: Executing LOCK BINLOG FOR BACKUP...
DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr/bin/innobackupex line 3045.
innobackupex: got a fatal error with the following stacktrace: at /usr/bin/innobackupex line 3048
main::mysql_query('HASH(0x1843178)', 'LOCK BINLOG FOR BACKUP') called at /usr/bin/innobackupex line 3517
main::mysql_lock_binlog('HASH(0x1843178)') called at /usr/bin/innobackupex line 2009
main::backup() called at /usr/bin/innobackupex line 1601
innobackupex: Error:
Error executing 'LOCK BINLOG FOR BACKUP': DBD::mysql::db do failed: Deadlock found when trying to get lock; try restarting transaction at /usr/bin/innobackupex line 3045.
150326 04:50:04 innobackupex: Waiting for ibbackup (pid=834130) to finish
03/26/2015 04:51:50

The only workaround I have found is to stop the second MySQL node of the cluster during the backup.

I hope you can provide us with a solution soon.

Xibu,

Yes, there is a workaround: you can disable the LOCK BINLOG FOR BACKUP feature so that the old FTWRL (FLUSH TABLES WITH READ LOCK) method is used instead.
You have to upgrade Percona XtraBackup to at least 2.2.9 and add this to my.cnf:
[sst]
inno-backup-opts='--no-backup-locks'

This was mentioned in comment #7 in [url]https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1401133[/url]
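
If you want to confirm the new behaviour after upgrading, a rough check (user, password and target directory below are placeholders) is to run innobackupex manually with the same flag; the output should then show FLUSH TABLES WITH READ LOCK instead of LOCK BINLOG FOR BACKUP:

# manual test backup with backup locks disabled (placeholder credentials and path)
innobackupex --no-backup-locks --user=sstuser --password=XXXX /tmp/xb-test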