Two nodes are going out of sync in a 3-node Percona XtraDB Cluster setup while they are trying to resync.
Could you please help me get all 3 nodes back in sync?
Thank you.
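As a starting point for the resync, it can help to compare each node's saved Galera state before deciding which node to bootstrap. A minimal sketch (the helper name and the grastate.dat path are my assumptions, not taken from the logs below):

```shell
# Hypothetical helper: print the saved group uuid and seqno from a node's
# grastate.dat (default path is /var/lib/mysql/grastate.dat on most installs).
# The node showing the highest seqno for the shared group uuid is usually the
# safest one to bootstrap first; the other nodes then rejoin via SST/IST.
galera_saved_state() {
    awk '/^uuid:/ {u=$2} /^seqno:/ {s=$2} END {print u, s}' "$1"
}
```

Running this on each node and comparing the seqno values before restarting the cluster avoids bootstrapping from a stale node.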
Installed RPMs:
Percona-XtraDB-Cluster-shared-5.5.27-23.6.356.rhel6.x86_64
Percona-XtraDB-Cluster-server-5.5.27-23.6.356.rhel6.x86_64
percona-release-0.0-1.x86_64
Percona-XtraDB-Cluster-client-5.5.27-23.6.356.rhel6.x86_64
Percona-XtraDB-Cluster-galera-2.0-1.114.rhel6.x86_64
percona-xtrabackup-2.0.3-470.rhel6.x86_64
OS: CentOS release 6.3 (Final)
Environment: virtual machines.
Here are the mysql error logs from all 3 nodes:
Node 2 (up):
WSREP: FK key len exceeded 0 4294967295 3500
131227 2:58:46 [ERROR] WSREP: FK key set failed: 11
WSREP: FK key append failed
Node 3 (down):
131227 5:00:11 [Note] WSREP: sst_donor_thread signaled with 0
131227 5:00:11 [Note] WSREP: Flushing tables for SST…
131227 5:00:11 [Note] WSREP: Provider paused at cf67b4da-6ea7-11e3-0800-7176739bc3d8:261
131227 5:00:11 [Note] WSREP: Tables flushed.
InnoDB: Warning: a long semaphore wait:
--Thread 139738020943616 has waited at trx0rseg.ic line 46 for 241.00 seconds the semaphore:
X-lock (wait_ex) on RW-latch at 0x7f177f07a6b8 '&block->lock'
a writer (thread id 139738020943616) has reserved it in mode wait exclusive
number of readers 1, waiters flag 0, lock_word: ffffffffffffffff
Last time read locked in file buf0flu.c line 1319
Last time write locked in file /home/jenkins/workspace/percona-xtradb-cluster-rpms/label_exp/centos6-64/target/BUILD/Percona-XtraDB-Cluster-5.5.27/Percona-XtraDB-Cluster-5.5.27/storage/innobase/include/trx0rseg.ic line 46
InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
SEMAPHORES
OS WAIT ARRAY INFO: reservation count 46, signal count 44
--Thread 139738020943616 has waited at trx0rseg.ic line 46 for 271.00 seconds the semaphore:
X-lock (wait_ex) on RW-latch at 0x7f177f07a6b8 '&block->lock'
a writer (thread id 139738020943616) has reserved it in mode wait exclusive
number of readers 1, waiters flag 0, lock_word: ffffffffffffffff
Last time read locked in file buf0flu.c line 1319
Last time write locked in file /home/jenkins/workspace/percona-xtradb-cluster-rpms/label_exp/centos6-64/target/BUILD/Percona-XtraDB-Cluster-5.5.27/Percona-XtraDB-Cluster-5.5.27/storage/innobase/include/trx0rseg.ic line 46
Mutex spin waits 38, rounds 925, OS waits 30
RW-shared spins 15, rounds 432, OS waits 14
RW-excl spins 1, rounds 60, OS waits 2
Spin rounds per wait: 24.34 mutex, 28.80 RW-shared, 60.00 RW-excl
TRANSACTIONS
Trx id counter A0E406071
Purge done for trx's n:o < A0E40606E undo n:o < 0
History list length 618
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION A0E40606E, not started
MySQL thread id 3, OS thread handle 0x7f174a757700, query id 2974 committed 260
---TRANSACTION A0E406070, not started
MySQL thread id 1, OS thread handle 0x7f1b16edb700, query id 2976 committed 261
END OF INNODB MONITOR OUTPUT
InnoDB: ###### Diagnostic info printed to the standard error stream
InnoDB: Warning: a long semaphore wait:
--Thread 139738020943616 has waited at trx0rseg.ic line 46 for 303.00 seconds the semaphore:
X-lock (wait_ex) on RW-latch at 0x7f177f07a6b8 '&block->lock'
a writer (thread id 139738020943616) has reserved it in mode wait exclusive
number of readers 1, waiters flag 0, lock_word: ffffffffffffffff
Last time read locked in file buf0flu.c line 1319
Last time write locked in file /home/jenkins/workspace/percona-xtradb-cluster-rpms/label_exp/centos6-64/target/BUILD/Percona-XtraDB-Cluster-5.5.27/Percona-XtraDB-Cluster-5.5.27/storage/innobase/include/trx0rseg.ic line 46
InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
InnoDB: Pending preads 0, pwrites 0
Node 1 (down):
131227 4:49:46 [Note] WSREP: 1 (Node3): State transfer from 0 (Node1) complete.
131227 4:49:46 [Note] WSREP: Member 1 (Node3) synced with group.
05:00:03 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at the Percona JIRA System Dashboard.
131227 5:11:34 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
131227 5:11:34 [Note] WSREP: forgetting 49cd72df-6eb2-11e3-0800-3db8fd926ddb (tcp://XXX.XXX.XXX.53-Node3:4567)
131227 5:11:34 [Note] WSREP: (bf5de37d-6eb3-11e3-0800-1b8b698cefc9, 'tcp://0.0.0.0:4567') turning message relay requesting off
131227 5:11:34 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 5ac327af-6eb5-11e3-0800-8a7f196d2532
131227 5:11:34 [Note] WSREP: STATE EXCHANGE: sent state msg: 5ac327af-6eb5-11e3-0800-8a7f196d2532
131227 5:11:34 [Note] WSREP: STATE EXCHANGE: got state msg: 5ac327af-6eb5-11e3-0800-8a7f196d2532 from 0 (Node1)
131227 5:11:34 [Note] WSREP: STATE EXCHANGE: got state msg: 5ac327af-6eb5-11e3-0800-8a7f196d2532 from 1 (Node2)
131227 5:11:34 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 4,
members = 1/2 (joined/total),
act_id = 864,
last_appl. = 835,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = cf67b4da-6ea7-11e3-0800-7176739bc3d8
131227 5:11:34 [Warning] WSREP: Donor 49cd72df-6eb2-11e3-0800-3db8fd926ddb is no longer in the group. State transfer cannot be completed, need to abort. Aborting…
131227 5:11:34 [Note] WSREP: /usr/sbin/mysqld: Terminated.
131227 05:11:34 mysqld_safe mysqld from pid file /mnt/data//Node1.pid ended
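The abort at 5:11:34 happened because the chosen donor (Node 3) left the group mid-transfer. If one node is known to be healthy, as Node 2 appears to be here, it can be pinned as the preferred SST donor so a rejoining node does not pick a peer that is itself resyncing. A hedged my.cnf sketch (the name must match that node's wsrep_node_name; "Node2" here is my assumption):

```
[mysqld]
# Prefer Node2 as the SST donor; the trailing comma lets Galera fall back
# to any other available donor if Node2 cannot serve the transfer.
wsrep_sst_donor = Node2,
```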
131227 5:24:10 [Note] WSREP: Assign initial position for certification: 960, protocol version: 2
131227 5:24:10 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (cf67b4da-6ea7-11e3-
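This last warning is the key symptom on Node 1: its local state UUID is all zeros, meaning its grastate.dat no longer records any position in the group, so incremental state transfer (IST) is impossible and a full SST from a donor is required. A small sketch of that check (hypothetical helper, assuming the standard grastate.dat format):

```shell
# Hypothetical check: a zeroed uuid or a seqno of -1 in grastate.dat means
# the node has no usable local state, so a full SST is unavoidable.
needs_full_sst() {
    local uuid seqno
    uuid=$(awk '/^uuid:/ {print $2}' "$1")
    seqno=$(awk '/^seqno:/ {print $2}' "$1")
    [ "$uuid" = "00000000-0000-0000-0000-000000000000" ] || [ "$seqno" = "-1" ]
}
```

With a healthy donor in the primary component, restarting such a node normally triggers the full SST automatically.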