Xtrabackup 8.0.14 error while restoring

Hi all,

I’m using Percona Server for MySQL 8.0.20 on Ubuntu 18.04, on an AWS EC2 m5.2xlarge instance. The xtrabackup version in use is 8.0.14.

While testing the restore process, the prepare stage fails with the error captured in this Pastebin link: https://pastebin.pl/view/74b1832d

$ time xtrabackup --prepare --parallel=10 --use-memory=20G --apply-log-only --target-dir=/var/backups/mysql/2020-10-04_10-05-48/ >> /var/log/mysql/xtrabackup_restore.log 2>&1

Here is a snippet of the error output:

Open file list len in shard 31 is 0

Open file list len in shard 32 is 9

InnoDB: Assertion failure: dict0dict.cc:1216:table2 == nullptr

InnoDB: thread 140008395342336

InnoDB: We intentionally generate a memory trap.

InnoDB: Submit a detailed bug report to https://jira.percona.com/projects/PXB.

InnoDB: If you get repeated assertion failures or crashes, even

InnoDB: immediately after the mysqld startup, there may be

InnoDB: corruption in the InnoDB tablespace. Please refer to

InnoDB: http://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html

InnoDB: about forcing recovery.

15:30:33 UTC - mysqld got signal 6 ;

Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.

Thread pointer: 0x55bc783d1800

Attempting backtrace. You can use the following information to find out

where mysqld died. If you see no messages after this, something went

terribly wrong…

stack_bottom = 7ffe5f6a38b0 thread_stack 0x46000

xtrabackup(my_print_stacktrace(unsigned char const*, unsigned long)+0x3d) [0x55bc736ec8dd]

xtrabackup(handle_fatal_signal+0x2fb) [0x55bc7255c32b]

/lib/x86_64-linux-gnu/libpthread.so.0(+0x128a0) [0x7f563e6918a0]

/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7) [0x7f563c6eef47]

/lib/x86_64-linux-gnu/libc.so.6(abort+0x141) [0x7f563c6f08b1]

xtrabackup(ut_dbg_assertion_failed(char const*, char const*, unsigned long)+0xb2) [0x55bc729ddca2]

xtrabackup(dict_table_add_to_cache(dict_table_t*, unsigned long, mem_block_info_t*)+0x11c) [0x55bc726f28ac]

xtrabackup(dd_table_create_on_dd_obj(dd::Table const*, dd::Partition const*, std::__cxx11::basic_string<char, std::char_traits, Stateless_allocator<char, dd::String_type_alloc, My_free_functor> > const*, bool, unsigned int)+0x1938) [0x55bc72717238]

xtrabackup(+0x144cc65) [0x55bc72718c65]

xtrabackup(dd_table_load_on_dd_obj(dd::cache::Dictionary_client*, unsigned int, dd::Table const&, dict_table_t*&, THD*, std::__cxx11::basic_string<char, std::char_traits, Stateless_allocator<char, dd::String_type_alloc, My_free_functor> > const*, bool)+0x53) [0x55bc72718cd3]

xtrabackup(dict_load_tables_from_space_id(unsigned int, THD*, trx_t*)+0x665) [0x55bc72042155]

xtrabackup(+0xd766c3) [0x55bc720426c3]

xtrabackup(+0xd7bfe0) [0x55bc72047fe0]

xtrabackup(main+0xd37) [0x55bc72006787]

/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f563c6d1b97]

xtrabackup(_start+0x2a) [0x55bc720344ca]

Trying to get some variables.

Some pointers may be invalid and cause the dump to abort.

Query (0):

Connection ID (thread ID): 0


Please report a bug at https://jira.percona.com/projects/PXB

Any pointers to this would be of great help.

Is this an incremental backup you are trying to prepare? The --apply-log-only flag is only used when doing incremental backups. How did you create this backup?
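For reference, the documented order is: prepare the base backup (and every intermediate increment) with --apply-log-only, and run only the final prepare without it. A dry-run sketch of that order, with placeholder directory names and `echo` standing in for actual execution:

```shell
#!/bin/sh
# Dry-run sketch of the documented prepare order for a full backup plus one
# incremental. Paths are placeholders, not the poster's actual directories.
BASE=/backups/mysql/full
INC1=/backups/mysql/inc1
run() { echo "+ $*"; }   # swap 'echo' out to actually execute the commands

# The base (and any intermediate increments) are prepared with --apply-log-only...
run xtrabackup --prepare --apply-log-only --target-dir="$BASE"
# ...and the last (or only) increment is applied WITHOUT --apply-log-only.
run xtrabackup --prepare --target-dir="$BASE" --incremental-dir="$INC1"
```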


This seems to be related/caused by https://bugs.mysql.com/bug.php?id=98501

Do you have partitioned tables, and did you perform any operations on them while the backup was being taken?

Thanks Matthew and Marcelo.

Here are the steps that I perform while taking backups and restoring them:

full backup

xtrabackup --backup --parallel=2 --compress --compress-threads=2 --slave-info --target-dir=/backups/mysql/2020-10-01_10-05-00/

incremental backup

xtrabackup --backup --parallel=2 --compress --compress-threads=2 --slave-info --target-dir=/backups/mysql/2020-10-02_10-05-00/ --incremental --incremental-basedir=/backups/mysql/2020-10-01_10-05-00/

While restoring:

decompress full backup

xtrabackup --decompress --parallel=2 --use-memory=10G --target-dir=/backups/mysql/2020-10-01_10-05-00/

decompress incremental backup

xtrabackup --decompress --parallel=2 --use-memory=10G --target-dir=/backups/mysql/2020-10-02_10-05-00/

prepare the base backup

xtrabackup --prepare --parallel=2 --use-memory=10G --apply-log-only --target-dir=/backups/mysql/2020-10-01_10-05-00/ #getting error in this step

prepare the incremental backup

xtrabackup --prepare --parallel=2 --use-memory=10G --target-dir=/backups/mysql/2020-10-01_10-05-00/ --incremental-dir=/backups/mysql/2020-10-02_10-05-00/
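Taken together, the restore steps above can be sketched as one script (a dry-run using the same example timestamps; `echo` stands in for real execution, so nothing touches actual backup directories):

```shell
#!/bin/sh
# Dry-run of the decompress + prepare sequence above. 'run' only echoes the
# command line it is given; remove the echo to execute for real.
FULL=/backups/mysql/2020-10-01_10-05-00
INC=/backups/mysql/2020-10-02_10-05-00
run() { echo "+ $*"; }

run xtrabackup --decompress --parallel=2 --use-memory=10G --target-dir="$FULL"
run xtrabackup --decompress --parallel=2 --use-memory=10G --target-dir="$INC"
run xtrabackup --prepare --parallel=2 --use-memory=10G --apply-log-only --target-dir="$FULL"
run xtrabackup --prepare --parallel=2 --use-memory=10G --target-dir="$FULL" --incremental-dir="$INC"
```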

We use partitioned tables, and most of our stored procedures also use the EXCHANGE PARTITION clause (ALTER TABLE … EXCHANGE PARTITION) as part of our ETL.
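For illustration, this is the kind of statement involved; the table and partition names here are invented, not our actual schema:

```shell
# Hypothetical example of the EXCHANGE PARTITION DDL the ETL issues.
# Names are made up; to run it for real, pipe the statement to the mysql client.
exchange_sql() {
  printf 'ALTER TABLE %s EXCHANGE PARTITION %s WITH TABLE %s;' "$1" "$2" "$3"
}
exchange_sql facts p202010 facts_staging
# e.g.: mysql -e "$(exchange_sql facts p202010 facts_staging)"
```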

Since the link Marcelo shared points to an issue with exchanging partitions, I also tried running OPTIMIZE TABLE on the staging tables used in the partition exchange, but while restoring I was again greeted with the same error.

The backups are taken only after the entire ETL, of which the partition exchange is a part, has finished.