Failure during prepare/apply-log process

I’m seeing the error below during the apply-log step. This had been working cleanly for some time. The backup is taken on one host and prepared on another. The only known configuration change was splitting the backup source’s configuration file into a my.cnf and a server.cnf (sourced via globbing, roughly as sketched below). The backup source was not restarted after the new configuration files were distributed, since their contents are identical to the running configuration.
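
For reference, the split looks something like this (the include directory and section contents are illustrative, not the exact files from the source):

# /etc/my.cnf
[mysqld]
# shared settings ...
!includedir /etc/my.cnf.d

# /etc/my.cnf.d/server.cnf
[mysqld]
# server-specific settings, identical to what the running instance was started with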

Any insight would be appreciated.

Backup taken using xtrabackup 2.1.2; now attempting with 2.1.4
Backup prepared using both xtrabackup 2.1.2 and 2.1.4
MariaDB 5.5.30 on both the backup source and restore target

Backup command:

/usr/bin/innobackupex-1.5.1 \
--defaults-file=/etc/my.cnf \
--slave-info \
--safe-slave-backup \
--ibbackup=/usr/bin/xtrabackup \
--stream=tar $MYSQL_DIR | gzip > $BACKUP_FILE

Prepare command (following decompress/untar):

/usr/bin/innobackupex-1.5.1 \
--use-memory=2GB \
--ibbackup=/usr/bin/xtrabackup \
--apply-log $MYSQL_DIR

xtrabackup Error:

InnoDB: Doing recovery: scanned up to log sequence number 29789176282624 (99 %)
InnoDB: Doing recovery: scanned up to log sequence number 29789181525504 (99 %)
InnoDB: Doing recovery: scanned up to log sequence number 29789186768384 (99 %)
InnoDB: Doing recovery: scanned up to log sequence number 29789192011264 (99 %)
InnoDB: Doing recovery: scanned up to log sequence number 29789197254144 (99 %)
InnoDB: Doing recovery: scanned up to log sequence number 29789198942250 (99 %)
130917 15:49:24 InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: In a MySQL replication slave the last master binlog file
InnoDB: position 241157890, file name mysql-binlog.000188
InnoDB: and relay log file
InnoDB: position 241158181, file name /mysql/logs/relay/mysqld-relay.000307
InnoDB: Last MySQL binlog file position 0 979086543, file name /mysql/logs/bin/mysql-binlog.003159
130917 15:49:55 Percona XtraDB (http://www.percona.com) 5.1.70-14.6 started; log sequence number 29789198942250
130917 15:49:57 InnoDB: Assertion failure in thread 140307253425920 in file innodb_int.cc line 807
InnoDB: Failing assertion: cset == 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
innobackupex-1.5.1: Error:
innobackupex-1.5.1: ibbackup failed at /usr/bin/innobackupex-1.5.1 line 416.

Why did you use the “--ibbackup=/usr/bin/xtrabackup” parameter? Your server version is 5.5.30, yet you told innobackupex to take and prepare the backup with an ‘xtrabackup’ binary built against MySQL 5.1.70:

130917 15:49:55 Percona XtraDB (http://www.percona.com) 5.1.70-14.6 started; log sequence number 29789198942250

You should use the xtrabackup_55 binary here.
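
For example, keeping your other options the same and assuming xtrabackup_55 is installed alongside the other binaries in /usr/bin, the commands would look roughly like this:

/usr/bin/innobackupex-1.5.1 \
--defaults-file=/etc/my.cnf \
--slave-info \
--safe-slave-backup \
--ibbackup=/usr/bin/xtrabackup_55 \
--stream=tar $MYSQL_DIR | gzip > $BACKUP_FILE

/usr/bin/innobackupex-1.5.1 \
--use-memory=2GB \
--ibbackup=/usr/bin/xtrabackup_55 \
--apply-log $MYSQL_DIR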

I came to that realization shortly after posting. I had initially overlooked it because the same command had been working for a very long time. Out of curiosity, what would cause this to suddenly manifest itself?

Edit: Thanks, by the way. :)