InnoDB errors after state transfer

I have a problem with a four-node cluster after an SST when a node has failed.

The facts:

  • Four-node Percona XtraDB Cluster running version 5.5.29-23.7.2.389.rhel6 on x86_64
  • Using CentOS 6.4
  • Both SST methods tried (rsync and xtrabackup)
  • Large imports sometimes break the cluster as well: one random node stops, and after it rejoins I receive the same errors as below.
  • The credentials for the transfer are correct on every node and the transfer itself works (see the configuration sketch after this list)
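
For reference, the SST-related settings are the same on every node and look roughly like this (shown via grep; the host names and the sstuser credentials below are placeholders, not the real values):

$ grep wsrep /etc/my.cnf
wsrep_provider        = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name    = cluster-001
wsrep_cluster_address = gcomm://server01,server02,server03,server04
wsrep_sst_method      = rsync            # xtrabackup was tried as well
wsrep_sst_auth        = sstuser:sstpass  # only used by the xtrabackup method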

When a node rejoins the cluster it requests a state transfer, because it is out of sync. As far as I can see the transfer succeeds and the node joins the cluster again; it even reports itself as synced (see the quick check after the error output below). But when I try to select something from the transferred database I receive the following errors:

130125 15:57:16 InnoDB: Error: page 1 log sequence number 63541754
InnoDB: is in the future! Current system log sequence number 1611701.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: for more information.

or

130125 15:58:32 InnoDB: Error: table xxxx.xxxx does not exist in the InnoDB internal
InnoDB: data dictionary though MySQL is trying to drop it.
InnoDB: Have you copied the .frm file of the table to the
InnoDB: MySQL database directory from another database?
InnoDB: You can look for further help from
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/innodb-troubleshooting.html

or

130125 16:21:38 InnoDB: Error: trying to open a table, but could not
InnoDB: open the tablespace file './xxxx/xxxx.ibd'!
InnoDB: Have you moved InnoDB .ibd files around without using the
InnoDB: commands DISCARD TABLESPACE and IMPORT TABLESPACE?
InnoDB: It is also possible that this is a temporary table #sql...,
InnoDB: and MySQL removed the .ibd file for this.
InnoDB: Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/innodb-troubleshooting-datadict.html
InnoDB: for how to resolve the issue.

which can result in:

InnoDB: Error: tablespace id is 4955 in the data dictionary
InnoDB: but in file ./xxxx/xxxx.ibd it is 4991!
130125 16:22:18 InnoDB: Assertion failure in thread 140335641454336 in file fil0fil.c line 776
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
15:22:18 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at System Dashboard - Percona JIRA

key_buffer_size=8388608
read_buffer_size=2097152
max_used_connections=12
max_threads=151
thread_count=10
connection_count=10
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 628597 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0xde7f370
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fa2700ede48 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x7c83f5]
/usr/sbin/mysqld(handle_fatal_signal+0x4a4)[0x6a1e04]
/lib64/libpthread.so.0(+0xf500)[0x7fa28eb5f500]
/lib64/libc.so.6(gsignal+0x35)[0x7fa28dd278a5]
/lib64/libc.so.6(abort+0x175)[0x7fa28dd29085]
/usr/sbin/mysqld[0x90267e]
/usr/sbin/mysqld[0x902799]
/usr/sbin/mysqld[0x90ea7d]
/usr/sbin/mysqld[0x90eb89]
/usr/sbin/mysqld[0x8e4629]
/usr/sbin/mysqld[0x878443]
/usr/sbin/mysqld[0x878e01]
/usr/sbin/mysqld[0x967978]
/usr/sbin/mysqld[0x9680c6]
/usr/sbin/mysqld[0x865977]
/usr/sbin/mysqld[0x848ab9]
/usr/sbin/mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0xae)[0x6a74de]
/usr/sbin/mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x824)[0x5f8474]
/usr/sbin/mysqld(_Z14mysql_rm_tableP3THDP10TABLE_LISTcc+0x15d)[0x5f8b7d]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0xafd)[0x59425d]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x333)[0x598de3]
/usr/sbin/mysqld[0x5997a0]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1a82)[0x59c072]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x169)[0x59c499]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x15f)[0x63710f]
/usr/sbin/mysqld(handle_one_connection+0x51)[0x6372d1]
/lib64/libpthread.so.0(+0x7851)[0x7fa28eb57851]
/lib64/libc.so.6(clone+0x6d)[0x7fa28dddd11d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fa0a8004d70): is an invalid pointer
Connection ID (thread ID): 29
Status: NOT_KILLED

You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
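
For completeness, the node does report itself as in sync before I run anything against it; checking the wsrep status gives, for example:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+

Only when I then select from the transferred schema do the errors above appear.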

I know these are the same InnoDB errors you get when you hard-copy InnoDB files from one server to another, but I can't figure out why this happens here, where the cluster itself performs the transfer.
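
For what it is worth, this is roughly how I check the freshly transferred files on the joiner right after the SST (the datadir is taken from the sync log further down; xxxx is the placeholder schema from the errors above). Stale timestamps on ib_logfile* next to newly transferred .ibd files would fit the "log sequence number is in the future" error:

# on the joiner (umbriel), immediately after the SST has finished:
# the shared tablespace, the InnoDB log files and the per-table .ibd
# files should all have been refreshed by the transfer
ls -l /db/mysql/db/ibdata1 /db/mysql/db/ib_logfile* /db/mysql/db/xxxx/*.ibd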

Hope someone can help me out.
If more info is needed, please ask; it can be provided.

Below is the sync log, which looks fine to me as far as I can see:

130125 14:19:37 [Note] WSREP: gcomm: connected
130125 14:19:37 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
130125 14:19:37 [Note] WSREP: Shifting CLOSED → OPEN (TO: 0)
130125 14:19:37 [Note] WSREP: Opened channel ‘cluster-001’
130125 14:19:37 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 2, memb_num = 4
130125 14:19:37 [Note] WSREP: Waiting for SST to complete.
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: sent state msg: 95705119-66f1-11e2-0800-da2a3edb9a84
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 0 (server03)
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 1 (server04)
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 3 (server01)
130125 14:19:37 [Note] WSREP: STATE EXCHANGE: got state msg: 95705119-66f1-11e2-0800-da2a3edb9a84 from 2 (server02)
130125 14:19:37 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 17,
members = 3/4 (joined/total),
act_id = 3918,
last_appl. = -1,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = 7a4c0d88-6631-11e2-0800-bd2796302ac7
130125 14:19:37 [Note] WSREP: Flow-control interval: [32, 32]
130125 14:19:37 [Note] WSREP: Shifting OPEN → PRIMARY (TO: 3918)
130125 14:19:37 [Note] WSREP: State transfer required:
Group state: 7a4c0d88-6631-11e2-0800-bd2796302ac7:3918
Local state: 00000000-0000-0000-0000-000000000000:-1
130125 14:19:37 [Note] WSREP: New cluster view: global state: 7a4c0d88-6631-11e2-0800-bd2796302ac7:3918, view# 18: Primary, number of nodes: 4, my index: 2, protocol version 2
130125 14:19:37 [Warning] WSREP: Gap in state sequence. Need state transfer.
130125 14:19:39 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address 'xxx.xxx.xxx.xxx:23306' --auth '' --datadir '/db/mysql/db/' --defaults-file '/etc/my.cnf' --parent '23002''
130125 14:19:39 [Note] WSREP: Prepared SST request: rsync|10.34.17.46:23306/rsync_sst
130125 14:19:39 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
130125 14:19:39 [Note] WSREP: Assign initial position for certification: 3918, protocol version: 2
130125 14:19:39 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (7a4c0d88-6631-11e2-0800-bd2796302ac7): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():440. IST will be unavailable.
130125 14:19:39 [Note] WSREP: Node 2 (umbriel) requested state transfer from ‘any’. Selected 0 (hadar)(SYNCED) as donor.
130125 14:19:39 [Note] WSREP: Shifting PRIMARY → JOINER (TO: 3918)
130125 14:19:39 [Note] WSREP: Requesting state transfer: success, donor: 0
130125 14:20:03 [Note] WSREP: 0 (hadar): State transfer to 2 (umbriel) complete.
130125 14:20:03 [Note] WSREP: Member 0 (hadar) synced with group.
WSREP_SST: [INFO] Joiner cleanup. (20130125 14:20:04.189)
done.
130125 14:20:04 [Note] WSREP: SST complete, seqno: 3918
130125 14:20:04 [Note] Plugin ‘FEDERATED’ is disabled.
130125 14:20:04 [Warning] option ‘innodb-autoextend-increment’: unsigned value 16777216 adjusted to 1000
130125 14:20:04 InnoDB: The InnoDB memory heap is disabled
130125 14:20:04 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130125 14:20:04 InnoDB: Compressed tables use zlib 1.2.3
130125 14:20:04 InnoDB: Using Linux native AIO
130125 14:20:04 InnoDB: Initializing buffer pool, size = 6.0G
InnoDB: HugeTLB: Warning: Failed to allocate 6618611712 bytes. errno 22
InnoDB: HugeTLB: Warning: Using conventional memory pool
130125 14:20:05 InnoDB: Completed initialization of buffer pool
130125 14:20:05 InnoDB: highest supported file format is Barracuda.
130125 14:20:15 InnoDB: Waiting for the background threads to start
130125 14:20:16 Percona XtraDB (http://www.percona.com) 1.1.8-rel29.1 started; log sequence number 1598099
130125 14:20:16 [Note] Server hostname (bind-address): ‘0.0.0.0’; port: 3306
130125 14:20:16 [Note] - ‘0.0.0.0’ resolves to ‘0.0.0.0’;
130125 14:20:16 [Note] Server socket created on IP: ‘0.0.0.0’.
130125 14:20:16 [Note] Event Scheduler: Loaded 0 events
130125 14:20:16 [Note] WSREP: Signalling provider to continue.
130125 14:20:16 [Note] WSREP: SST received: 7a4c0d88-6631-11e2-0800-bd2796302ac7:3918
130125 14:20:16 [Note] /usr/sbin/mysqld: ready for connections.
Version: ‘5.5.28’ socket: ‘/var/lib/mysql/mysql.sock’ port: 3306 Percona XtraDB Cluster (GPL), wsrep_23.7.r3821
130125 14:20:16 [Note] WSREP: 2 (umbriel): State transfer from 0 (hadar) complete.
130125 14:20:16 [Note] WSREP: Shifting JOINER → JOINED (TO: 3918)
130125 14:20:16 [Note] WSREP: Member 2 (umbriel) synced with group.
130125 14:20:16 [Note] WSREP: Shifting JOINED → SYNCED (TO: 3918)
130125 14:20:16 [Note] WSREP: Synchronized with group, ready for connections
130125 14:20:16 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.

Just as a note.

On a 2-node + 1 garbd setup, one of my nodes always crashed when a trigger fired on a table with an auto-increment column. To recover it I have to (see the sketch after this list):

  • stop mysql service on failed node
  • rm -rf /var/lib/mysql/*
  • start mysql service
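
A rough sketch of that sequence, assuming the default /var/lib/mysql datadir and the CentOS 6 SysV init script:

# on the failed node only -- an empty datadir forces a full SST
# from a donor when the node is started again
service mysql stop
rm -rf /var/lib/mysql/*
service mysql start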

This is very interesting…

So, from the first message I can see that your MySQL database/schema is missing many tables and that mysqld crashed with a SIGABRT (signal 6). In the second message, I'm seeing an interesting part:

The errno 22 refers to the failed HugeTLB allocation: the requested size was rejected as an invalid argument.

[bianchi@srvwb1027 ]$ perror 22
OS error code 22: Invalid argument
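
You can also quickly check whether large pages are available at all on that box, for example (output trimmed, the figures are just what your kernel currently reports):

[bianchi@srvwb1027 ]$ grep -i huge /proc/meminfo
HugePages_Total:       0
HugePages_Free:        0
Hugepagesize:       2048 kB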

Take a look at the page below:
https://dev.mysql.com/doc/refman/5.5…e-support.html

And check whether you can adjust the related values in sysctl.conf - I'm not sure the exact values will fit as-is, since you will need to size them for your environment's needs.
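
A minimal sketch of the kind of sysctl.conf entries that page discusses - treat every number below as a placeholder that has to be sized for your 6 GB buffer pool and for the mysql user's group id, then reload with sysctl -p:

# /etc/sysctl.conf -- illustrative values only
vm.nr_hugepages = 3500          # 2 MB pages; ~3072 cover 6 GB, plus some headroom
vm.hugetlb_shm_group = 27       # gid allowed to use huge pages (the mysql gid here)
kernel.shmmax = 8589934592      # max shared memory segment size, in bytes
kernel.shmall = 2097152         # total shared memory, in 4 kB pages (= 8 GB)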