
Percona Server 5.5.29 crashing when replicating

jonnythejap Entrant
Hi there,

I am running simple master -> slave replication to a single node. I did an update to an application which added some triggers, and then attempted to restart replication by taking a copy from the master with innobackupex, applying the logs, and copying it over to the slave and setting it up. Replication will run for a little while, catching up, but before too long it crashes, bringing down the slave database service.
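For reference, the workflow described above can be sketched roughly as follows. This is a generic outline of the innobackupex procedure, not the poster's exact commands; paths, hostnames, and the backup timestamp directory are illustrative.

```
# Sketch of the innobackupex slave-rebuild workflow (illustrative paths).
# 1. Take a hot backup on the master:
innobackupex --user=backup /backups/

# 2. Apply the logs so the backup's datadir is consistent:
innobackupex --apply-log /backups/2013-05-26_12-00-00/

# 3. Copy the prepared backup into the slave's datadir and fix ownership:
rsync -av /backups/2013-05-26_12-00-00/ slave:/var/lib/mysql/
ssh slave chown -R mysql:mysql /var/lib/mysql

# 4. Start mysqld on the slave, then CHANGE MASTER TO using the binlog
#    coordinates recorded in xtrabackup_binlog_info, and START SLAVE.
```

Step 2 matters: starting the copy before the logs have been applied leaves an inconsistent datadir.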

The relevant portion of the error log is as follows:

130526 12:29:21 InnoDB: Assertion failure in thread 140087775872768 in file btr0cur.c line 363
InnoDB: Failing assertion: btr_page_get_next(get_block->frame, mtr) == page_get_page_no(page)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: about forcing recovery.
17:29:21 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at

It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1564548 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0

Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...

stack_bottom = 0 thread_stack 0x40000

You may download the Percona Server operations manual by visiting You may find information
in the manual which will help you identify the cause of the crash.
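As an aside, the memory figure in that log is just the server's worst-case estimate from the formula it prints. It can be reproduced by hand; the variable values below are illustrative, not the poster's actual settings.

```shell
# Reproduce the crash log's worst-case memory estimate (illustrative values,
# in bytes; real values come from SHOW VARIABLES on the affected server).
key_buffer_size=$((32 * 1024 * 1024))
read_buffer_size=$((1 * 1024 * 1024))
sort_buffer_size=$((2 * 1024 * 1024))
max_threads=500

# key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads, in K
total_k=$(( (key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads) / 1024 ))
echo "${total_k} K"
```

Comparing the result against available RAM is a quick sanity check, though an assertion failure like the one above points at corruption rather than memory pressure.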

Anyone have any thoughts?


  • jonnythejap Entrant
    One thing I noticed on this latest attempt: whereas there are no errors on the master server, and no corrupted tables as far as mysqlcheck is concerned, the slave is logging a number of messages like these:

    130526 15:47:58 InnoDB: Error: page 37755 log sequence number 1425377063093
    InnoDB: is in the future! Current system log sequence number 790471970292.
    InnoDB: Your database may be corrupt or you may have copied the InnoDB
    InnoDB: tablespace but not the InnoDB log files. See
    InnoDB: for more information.

    Reading about how to fix it, the only remedy I've really seen is to dump the database and restore it, and given that our db is around 50 GB, that would incur over four hours of downtime on our production system, which we can't afford. Is the eventual crash related to the "log sequence number is in the future" errors? And is there a way to fix this without bringing down the existing site or dumping and reloading massive amounts of data?
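As the error text suggests, "log sequence number is in the future" usually means the InnoDB tablespace files and the InnoDB log files did not travel together when the backup was copied. A quick check (illustrative backup path; this is a generic sanity check, not something confirmed in this thread) is to verify the prepared backup still contains its redo logs and a plausible checkpoint LSN before it is copied to the slave:

```
# After innobackupex --apply-log, confirm the redo logs and checkpoint
# travel with the datadir; missing ib_logfile* is a classic cause of
# "log sequence number ... is in the future" on the restored slave.
ls /backups/2013-05-26_12-00-00/ib_logfile*
grep last_lsn /backups/2013-05-26_12-00-00/xtrabackup_checkpoints
```

If the copy was incomplete, re-seeding the slave from a fresh, fully prepared backup avoids a logical dump entirely; since innobackupex takes a hot backup, the downtime is confined to the slave.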
  • niljoshi MySQL Sage

    It's probably a bug.

    I found a couple of bugs with this assertion failure; the first two lead to that Percona bug above, while the other went nowhere but assumed table corruption. Can you put together a repeatable test case for this? You could also try the latest version of Percona Server / xtrabackup and check. It would be helpful if you could provide your my.cnf too.
