InnoDB errors after state transfer

I have a problem with a four-node cluster after an SST when a node has failed.

The facts:

  • Four-node Percona XtraDB Cluster running version 5.5.29 on x86_64
  • Using CentOS 6.4
  • Both SST methods tried (rsync and xtrabackup)
  • Large imports sometimes break the cluster as well: a random node stops, and after rejoining it hits the same errors as below.
  • Credentials for the transfer are correct across the whole cluster, and the transfer itself works

When a node rejoins the cluster, it requests a state transfer because it is out of sync. As far as I can see the transfer succeeds and the node joins the cluster again. But when I try to select something from the transferred database, I receive the following errors:

130125 15:57:16 InnoDB: Error: page 1 log sequence number 63541754
InnoDB: is in the future! Current system log sequence number 1611701.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: for more information.


130125 15:58:32 InnoDB: Error: table xxxx.xxxx does not exist in the InnoDB internal
InnoDB: data dictionary though MySQL is trying to drop it.
InnoDB: Have you copied the .frm file of the table to the
InnoDB: MySQL database directory from another database?
InnoDB: You can look for further help from


130125 16:21:38 InnoDB: Error: trying to open a table, but could not
InnoDB: open the tablespace file ‘./xxxx/xxxx.ibd’!
InnoDB: Have you moved InnoDB .ibd files around without using the
InnoDB: It is also possible that this is a temporary table #sql…,
InnoDB: and MySQL removed the .ibd file for this.
InnoDB: Please refer to
InnoDB: for how to resolve the issue.

which can result in:

InnoDB: Error: tablespace id is 4955 in the data dictionary
InnoDB: but in file ./xxxx/xxxx.ibd it is 4991!
130125 16:22:18 InnoDB: Assertion failure in thread 140335641454336 in file fil0fil.c line 776
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: about forcing recovery.
15:22:18 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at

It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 628597 K bytes of memory
Hope that’s ok; if not, decrease some variables in the equation.

Thread pointer: 0xde7f370
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 7fa2700ede48 thread_stack 0x40000

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fa0a8004d70): is an invalid pointer
Connection ID (thread ID): 29

You may download the Percona Server operations manual by visiting You may find information
in the manual which will help you identify the cause of the crash.

I know these are the same InnoDB errors you get when you hard-copy data files from one server to another, but I can't figure out why this happens in the cluster setup.

Hope someone can help me out.
If more info is needed, please ask; it can be provided.

Below is the sync log, which looks fine to me as far as I can see:

130125 14:19:37 [Note] WSREP: gcomm: connected
130125 14:19:37 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
130125 14:19:37 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
130125 14:19:37 [Note] WSREP: Opened channel ‘cluster-001’
130125 14:19:37 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 2, memb_num = 4
130125 14:19:37 [Note] WSREP: Waiting for SST to complete.
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: sent state msg: 95705119-66f1-11e2-0800-da2a3edb9a84
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 0 (server03)
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 1 (server04)
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 3 (server01)
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 2 (server02)
130125 14:19:37 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 17,
members = 3/4 (joined/total),
act_id = 3918,
last_appl. = -1,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = 7a4c0d88-6631-11e2-0800-bd2796302ac7
130125 14:19:37 [Note] WSREP: Flow-control interval: [32, 32]
130125 14:19:37 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 3918)
130125 14:19:37 [Note] WSREP: State transfer required:
Group state: 7a4c0d88-6631-11e2-0800-bd2796302ac7:3918
Local state: 00000000-0000-0000-0000-000000000000:-1
130125 14:19:37 [Note] WSREP: New cluster view: global state: 7a4c0d88-6631-11e2-0800-bd2796302ac7:3918, view# 18: Primary, number of nodes: 4, my index: 2, protocol version 2
130125 14:19:37 [Warning] WSREP: Gap in state sequence. Need state transfer.
130125 14:19:39 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '' --auth '' --datadir '/db/mysql/db/' --defaults-file '/etc/my.cnf' --parent '23002''
130125 14:19:39 [Note] WSREP: Prepared SST request: rsync|
130125 14:19:39 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130125 14:19:39 [Note] WSREP: Assign initial position for certification: 3918, protocol version: 2
130125 14:19:39 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (7a4c0d88-
6631-11e2-0800-bd2796302ac7): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():440. IST will be unavailable.
130125 14:19:39 [Note] WSREP: Node 2 (umbriel) requested state transfer from ‘any’. Selected 0 (hadar)(SYNCED) as donor.
130125 14:19:39 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 3918)
130125 14:19:39 [Note] WSREP: Requesting state transfer: success, donor: 0
130125 14:20:03 [Note] WSREP: 0 (hadar): State transfer to 2 (umbriel) complete.
130125 14:20:03 [Note] WSREP: Member 0 (hadar) synced with group.
WSREP_SST: [INFO] Joiner cleanup. (20130125 14:20:04.189)
130125 14:20:04 [Note] WSREP: SST complete, seqno: 3918
130125 14:20:04 [Note] Plugin ‘FEDERATED’ is disabled.
130125 14:20:04 [Warning] option ‘innodb-autoextend-increment’: unsigned value 16777216 adjusted to 1000
130125 14:20:04 InnoDB: The InnoDB memory heap is disabled
130125 14:20:04 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130125 14:20:04 InnoDB: Compressed tables use zlib 1.2.3
130125 14:20:04 InnoDB: Using Linux native AIO
130125 14:20:04 InnoDB: Initializing buffer pool, size = 6.0G
InnoDB: HugeTLB: Warning: Failed to allocate 6618611712 bytes. errno 22
InnoDB HugeTLB: Warning: Using conventional memory pool
130125 14:20:05 InnoDB: Completed initialization of buffer pool
130125 14:20:05 InnoDB: highest supported file format is Barracuda.
130125 14:20:15 InnoDB: Waiting for the background threads to start
130125 14:20:16 Percona XtraDB ( 1.1.8-rel29.1 started; log sequence number 1598099
130125 14:20:16 [Note] Server hostname (bind-address): ‘’; port: 3306
130125 14:20:16 [Note] - ‘’ resolves to ‘’;
130125 14:20:16 [Note] Server socket created on IP: ‘’.
130125 14:20:16 [Note] Event Scheduler: Loaded 0 events
130125 14:20:16 [Note] WSREP: Signalling provider to continue.
130125 14:20:16 [Note] WSREP: SST received: 7a4c0d88-6631-11e2-0800-bd2796302ac7:3918
130125 14:20:16 [Note] /usr/sbin/mysqld: ready for connections.
Version: ‘5.5.28’ socket: ‘/var/lib/mysql/mysql.sock’ port: 3306 Percona XtraDB Cluster (GPL), wsrep_23.7.r3821
130125 14:20:16 [Note] WSREP: 2 (umbriel): State transfer from 0 (hadar) complete.
130125 14:20:16 [Note] WSREP: Shifting JOINER -> JOINED (TO: 3918)
130125 14:20:16 [Note] WSREP: Member 2 (umbriel) synced with group.
130125 14:20:16 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 3918)
130125 14:20:16 [Note] WSREP: Synchronized with group, ready for connections
130125 14:20:16 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.

Just as a note.

On a two-node + garbd setup, one of my nodes always crashes when a trigger fires on a table with an auto-increment column. To recover, I have to:

  • stop mysql service on failed node
  • rm -rf /var/lib/mysql/*
  • start mysql service
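
In shell terms, the steps above amount to wiping the node's datadir so that grastate.dat disappears and the node's local state resets, forcing a full SST on restart. This is only a sketch; the datadir path and service name may differ on your setup.

```shell
# Sketch of the recovery on the failed node. WARNING: destructive -
# this deletes all local data; the node will pull everything via SST.
service mysql stop

# Removing the datadir also removes grastate.dat, resetting the local
# state UUID to 00000000-... so the node must request a full state transfer.
rm -rf /var/lib/mysql/*

# On start, the node rejoins the cluster and requests the SST.
service mysql start
```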

This is very interesting…

So, in the first message I see that your MySQL database/schema is missing many tables and that mysqld crashed with SIGABRT (signal 6). In the second message, I'm seeing an interesting part:

The errno 22 on that failed HugeTLB allocation means "invalid argument":

[bianchi@srvwb1027 ]$ perror 22
OS error code 22: Invalid argument

Take a look at the page below:…e-support.html

And check whether you can adjust the related values in sysctl.conf. I'm not sure these exact values will fit, since you'll need to tune them to your environment's needs…
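
As a rough guide, the failed allocation size from the log can be converted into a huge-page count for vm.nr_hugepages. This is a sketch that assumes the common 2 MB huge-page size on x86_64; check Hugepagesize in /proc/meminfo on the actual host.

```shell
# The allocation that failed in the log: the ~6 GB InnoDB buffer pool.
failed_alloc=6618611712

# Assumed huge page size: 2 MiB (the usual x86_64 default).
huge_page_size=$((2 * 1024 * 1024))

# Round up to whole pages to get a candidate vm.nr_hugepages value.
nr_hugepages=$(( (failed_alloc + huge_page_size - 1) / huge_page_size ))
echo "vm.nr_hugepages = $nr_hugepages"
```

That value would then go into sysctl.conf (together with kernel.shmmax/kernel.shmall large enough to cover the buffer pool), followed by `sysctl -p` and a mysqld restart.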