Xtrabackup 8.0.14 error while restoring

Hi all,

I’m running Percona Server for MySQL 8.0.20 on Ubuntu 18.04 on an AWS EC2 m5.2xlarge instance, with XtraBackup 8.0.14.

While testing the restore process, I hit an error during the prepare stage; the full output is in this pastebin link: https://pastebin.pl/view/74b1832d

$ time xtrabackup --prepare --parallel=10 --use-memory=20G --apply-log-only --target-dir=/var/backups/mysql/2020-10-04_10-05-48/ >> /var/log/mysql/xtrabackup_restore.log 2>&1

A snippet from the error log:


Open file list len in shard 31 is 0
Open file list len in shard 32 is 9
InnoDB: Assertion failure: dict0dict.cc:1216:table2 == nullptr
InnoDB: thread 140008395342336
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.percona.com/projects/PXB.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
15:30:33 UTC - mysqld got signal 6 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Thread pointer: 0x55bc783d1800
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7ffe5f6a38b0 thread_stack 0x46000
xtrabackup(my_print_stacktrace(unsigned char const*, unsigned long)+0x3d) [0x55bc736ec8dd]
xtrabackup(handle_fatal_signal+0x2fb) [0x55bc7255c32b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x128a0) [0x7f563e6918a0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7) [0x7f563c6eef47]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x141) [0x7f563c6f08b1]
xtrabackup(ut_dbg_assertion_failed(char const*, char const*, unsigned long)+0xb2) [0x55bc729ddca2]
xtrabackup(dict_table_add_to_cache(dict_table_t*, unsigned long, mem_block_info_t*)+0x11c) [0x55bc726f28ac]
xtrabackup(dd_table_create_on_dd_obj(dd::Table const*, dd::Partition const*, std::__cxx11::basic_string<char, std::char_traits, Stateless_allocator<char, dd::String_type_alloc, My_free_functor> > const*, bool, unsigned int)+0x1938) [0x55bc72717238]
xtrabackup(+0x144cc65) [0x55bc72718c65]
xtrabackup(dd_table_load_on_dd_obj(dd::cache::Dictionary_client*, unsigned int, dd::Table const&, dict_table_t*&, THD*, std::__cxx11::basic_string<char, std::char_traits, Stateless_allocator<char, dd::String_type_alloc, My_free_functor> > const*, bool)+0x53) [0x55bc72718cd3]
xtrabackup(dict_load_tables_from_space_id(unsigned int, THD*, trx_t*)+0x665) [0x55bc72042155]
xtrabackup(+0xd766c3) [0x55bc720426c3]
xtrabackup(+0xd7bfe0) [0x55bc72047fe0]
xtrabackup(main+0xd37) [0x55bc72006787]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f563c6d1b97]
xtrabackup(_start+0x2a) [0x55bc720344ca]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): Connection ID (thread ID): 0
Status: NOT_KILLED
Please report a bug at https://jira.percona.com/projects/PXB


Any pointers to this would be of great help.

Is this an incremental backup you are trying to prepare? The --apply-log-only flag is only used when doing incremental backups. How did you create this backup?

Hi,

This seems to be related/caused by https://bugs.mysql.com/bug.php?id=98501

Do you have partitions and have you performed any operation on those while taking the backup?

Thanks Matthew and Marcelo.

Here are the steps I perform to take backups and restore them:

full backup

xtrabackup --backup --parallel=2 --compress --compress-threads=2 --slave-info --target-dir=/backups/mysql/2020-10-01_10-05-00/

incremental backup

xtrabackup --backup --parallel=2 --compress --compress-threads=2 --slave-info --target-dir=/backups/mysql/2020-10-02_10-05-00/ --incremental --incremental-basedir=/backups/mysql/2020-10-01_10-05-00/

While restoring -

decompress full backup

xtrabackup --decompress --parallel=2 --use-memory=10G --target-dir=/backups/mysql/2020-10-01_10-05-00/

decompress incremental backup

xtrabackup --decompress --parallel=2 --use-memory=10G --target-dir=/backups/mysql/2020-10-02_10-05-00/

prepare the base backup

xtrabackup --prepare --parallel=2 --use-memory=10G --apply-log-only --target-dir=/backups/mysql/2020-10-01_10-05-00/ #getting error in this step

prepare the incremental backup

xtrabackup --prepare --parallel=2 --use-memory=10G --target-dir=/backups/mysql/2020-10-01_10-05-00/ --incremental-dir=/backups/mysql/2020-10-02_10-05-00/
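
For reference, the prepare order for a base backup plus one or more incrementals can be sketched as below. This is only an illustrative helper, not Percona tooling: the paths are the placeholders from this thread, and the script just prints the commands rather than invoking xtrabackup.

```shell
#!/bin/sh
# Sketch only: build the xtrabackup prepare sequence for a base backup plus
# incrementals (listed oldest first). Every prepare except the one for the
# LAST incremental must carry --apply-log-only, otherwise uncommitted
# transactions are rolled back too early and later incrementals can no
# longer be applied. Nothing is executed here; the plan is only printed.
BASE=/backups/mysql/2020-10-01_10-05-00
INCS="/backups/mysql/2020-10-02_10-05-00"   # space-separated, oldest first

PLAN="xtrabackup --prepare --apply-log-only --target-dir=$BASE"
LAST=${INCS##* }                            # newest incremental dir
for inc in $INCS; do
  if [ "$inc" = "$LAST" ]; then
    PLAN="$PLAN
xtrabackup --prepare --target-dir=$BASE --incremental-dir=$inc"
  else
    PLAN="$PLAN
xtrabackup --prepare --apply-log-only --target-dir=$BASE --incremental-dir=$inc"
  fi
done
printf '%s\n' "$PLAN"
```

Note the final prepare (the one applying the newest incremental) deliberately drops --apply-log-only so the rollback phase runs exactly once, at the end.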

We use partitioned tables, and most of our stored procedures also use the EXCHANGE PARTITION clause as part of our ETL.

Since the link Marcelo shared points to an issue with exchanging partitions, I also tried running OPTIMIZE TABLE on the staging tables used in the partition exchange, but the restore failed with the same error.

The backups are taken only after the entire ETL, of which the PARTITION EXCHANGE is a part, has finished.
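
Since the crash implicates partitioned tables, enumerating them up front can narrow down the suspects before the next backup/restore cycle. A minimal sketch: the information_schema query is standard MySQL, but the connection flags are placeholders, so the actual mysql call is left commented out and the script only prints the query.

```shell
#!/bin/sh
# Sketch: list every partitioned table and its partition count.
# information_schema.PARTITIONS has PARTITION_NAME = NULL for
# non-partitioned tables, so filtering on NOT NULL keeps only
# partitioned ones.
QUERY="SELECT TABLE_SCHEMA, TABLE_NAME, COUNT(*) AS num_partitions
FROM information_schema.PARTITIONS
WHERE PARTITION_NAME IS NOT NULL
GROUP BY TABLE_SCHEMA, TABLE_NAME"
# Connection details below are placeholders; uncomment and adjust to run:
# mysql -h 127.0.0.1 -u backup_user -p -e "$QUERY"
printf '%s\n' "$QUERY"
```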

Hi,
I'm hitting the same issue. Is there any solution to it yet?

/usr/local/xtrabackup/bin/xtrabackup --prepare --export --target-dir="/mydata/cvstaging/xtrabackupdbs/cmjcv"

xtrabackup: recognized server arguments: --innodb_checksum_algorithm=crc32 --innodb_log_checksums=1 --innodb_data_file_path=ibdata1:12M:autoextend --innodb_log_files_in_group=2 --innodb_log_file_size=4294967296 --innodb_page_size=16384 --innodb_undo_directory=./ --innodb_undo_tablespaces=2 --server-id=0 --innodb_log_checksums=ON --innodb_redo_log_encrypt=0 --innodb_undo_log_encrypt=0
xtrabackup: recognized client arguments: --prepare=1 --export=1 --target-dir=/mydata/cvstaging/xtrabackupdbs/cmjcv
/usr/local/xtrabackup/bin/xtrabackup version 8.0.14 based on MySQL server 8.0.21 Linux (x86_64) (revision id: 113f3d7)
xtrabackup: auto-enabling --innodb-file-per-table due to the --export option
xtrabackup: cd to /mydata/cvstaging/xtrabackupdbs/cmjcv/
xtrabackup: This target seems to be not prepared yet.
Number of pools: 1
xtrabackup: xtrabackup_logfile detected: size=8388608, start_lsn=(5162890676)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = .
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 8388608
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = .
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 8388608
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
PUNCH HOLE support available
Mutexes and rw_locks use GCC atomic builtins
Uses event mutexes
GCC builtin __atomic_thread_fence() is used for memory barrier
Compressed tables use zlib 1.2.3
Number of pools: 1
Using CPU crc32 instructions
Directories to scan './'
Scanning './'
Completed space ID check of 18 files.
Initializing buffer pool, total size = 128.000000M, instances = 1, chunk size =128.000000M
Completed initialization of buffer pool
page_cleaner coordinator priority: -20
page_cleaner worker priority: -20
page_cleaner worker priority: -20
page_cleaner worker priority: -20
The log sequence number 5148227819 in the system tablespace does not match the log sequence number 5162890676 in the ib_logfiles!
Database was not shutdown normally!
Starting crash recovery.
Starting to parse redo log at lsn = 5162890311, whereas checkpoint_lsn = 5162890676
Doing recovery: scanned up to log sequence number 5162890802
Log background threads are being started...
Applying a batch of 1 redo log records …
100%
Apply batch completed!
Using undo tablespace './undo_001'.
Using undo tablespace './undo_002'.
Opened 2 existing undo tablespaces.
GTID recovery trx_no: 27195
Creating shared tablespace for temporary tables
Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
File './ibtmp1' size is now 12 MB.
Scanning temp tablespace dir:'./#innodb_temp/'
Created 128 and tracked 128 new rollback segment(s) in the temporary tablespace. 128 are now active.
8.0.21 started; log sequence number 5162890802
Allocated tablespace ID 246 for cmj/s151_bn_read_status#p#p12, old maximum was 0
InnoDB: Assertion failure: dict0dict.cc:1216:table2 == nullptr
InnoDB: thread 140672482474240
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.percona.com/projects/PXB.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
08:16:57 UTC - mysqld got signal 6 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Thread pointer: 0x51aa7f0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fff8a43dc38 thread_stack 0x46000
/usr/local/xtrabackup/bin/xtrabackup(my_print_stacktrace(unsigned char const*, unsigned long)+0x2e) [0x2310b9e]
/usr/local/xtrabackup/bin/xtrabackup(handle_fatal_signal+0x2f3) [0x11bb313]
/lib64/libpthread.so.0(+0xf5d0) [0x7ff0dd1855d0]
/lib64/libc.so.6(gsignal+0x37) [0x7ff0db061207]
/lib64/libc.so.6(abort+0x148) [0x7ff0db0628f8]
/usr/local/xtrabackup/bin/xtrabackup(ut_dbg_assertion_failed(char const*, char const*, unsigned long)+0xb2) [0x161bf32]
/usr/local/xtrabackup/bin/xtrabackup(dict_table_add_to_cache(dict_table_t*, unsigned long, mem_block_info_t*)+0x11c) [0x134479c]
/usr/local/xtrabackup/bin/xtrabackup(dd_table_create_on_dd_obj(dd::Table const*, dd::Partition const*, std::basic_string<char, std::char_traits, Stateless_allocator<char, dd::String_type_alloc, My_free_functor> > const*, bool, unsigned int)+0x1980) [0x1369530]
/usr/local/xtrabackup/bin/xtrabackup() [0x136afcf]
/usr/local/xtrabackup/bin/xtrabackup(dd_table_load_on_dd_obj(dd::cache::Dictionary_client*, unsigned int, dd::Table const&, dict_table_t*&, THD*, std::basic_string<char, std::char_traits, Stateless_allocator<char, dd::String_type_alloc, My_free_functor> > const*, bool)+0x53) [0x136b043]
/usr/local/xtrabackup/bin/xtrabackup(dict_load_tables_from_space_id(unsigned int, THD*, trx_t*)+0x4db) [0xccb6fb]
/usr/local/xtrabackup/bin/xtrabackup() [0xccbc6e]
/usr/local/xtrabackup/bin/xtrabackup() [0xccce00]
/usr/local/xtrabackup/bin/xtrabackup(main+0xca9) [0xc8ca99]
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7ff0db04d3d5]
/usr/local/xtrabackup/bin/xtrabackup() [0xcb9939]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): Connection ID (thread ID): 0
Status: NOT_KILLED
