Xtrabackup replication recipe - missing the binlog file


I’m trying to set up a simple master-slave replication with innobackupex according to this recipe:


Everything goes fine until I get to step 5:

Can you share the directory listing of the backup directory after the prepare phase (after running --apply-log), or do you have the tool’s log output from when you ran the backup?


Thanks for the reply.

This is the backup’s base dir after --apply-log:

drwxr-x--- 10 root root 4096 May 30 14:12 ./
drwxr-x--- 3 root root 4096 May 29 20:54 ../
-rw-r----- 1 root root 427 May 29 20:45 backup-my.cnf
drwxr-x--- 2 root root 4096 May 29 20:31 bandwidth/
-rw-r----- 1 root root 2006746 May 29 20:45 ib_buffer_pool
-rw-r----- 1 root root 17326669824 May 30 14:12 ibdata1
-rw-r----- 1 root root 25769803776 May 30 14:12 ib_logfile0
-rw-r----- 1 root root 25769803776 May 30 14:11 ib_logfile1
-rw-r----- 1 root root 12582912 May 30 14:12 ibtmp1
drwxr-x--- 2 root root 4096 May 29 20:31 lcn/
drwxr-x--- 2 root root 4096 May 29 20:31 mysql/
drwxr-x--- 2 root root 4096 May 29 20:31 performance_schema/
drwxr-x--- 2 root root 4096 May 29 20:31 phpmyadmin/
drwxr-x--- 2 root root 4096 May 29 20:31 research_data/
drwxr-x--- 2 root root 12288 May 29 20:45 sys/
drwxr-x--- 2 root root 16384 May 29 20:45 vt_api/
-rw-r----- 1 root root 125 May 30 13:58 xtrabackup_checkpoints
-rw-r----- 1 root root 438 May 29 20:45 xtrabackup_info
-rw-r----- 1 root root 311558144 May 30 13:58 xtrabackup_logfile

I didn’t redirect the output of innobackupex to a file this time, but I can tell you it finished without an error. If you need the log, I can make another backup (it takes a while, as the database is large).

All the best,

Can you share all the innobackupex commands you used to back up and then prepare (and copy back?)? Also, what xtrabackup --version are you using?

Sure, here goes:

[root@vt-api ~]# xtrabackup --version
xtrabackup version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 6f7a799)

innobackupex --user=root --password=something /root/temp/full/
innobackupex --user=root --password=something --apply-log /root/temp/full/2017-05-29_19-19-16/

Prior to innobackupex I was just using xtrabackup, but the same thing happened:

xtrabackup --backup --user=root --password=something --target-dir=/root/temp/full
innobackupex --user=root --password=something --apply-log /root/temp/full/
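As a quick sanity check after the prepare step (this is my own sketch, not part of the recipe): the binlog coordinates should end up in a file called xtrabackup_binlog_info in the backup directory. A small POSIX shell helper to look for it:

```shell
#!/bin/sh
# Sketch (mine, not from the recipe): verify that a prepared backup
# directory contains the binlog coordinates that XtraBackup records
# when binary logging is enabled on the source server.
check_binlog_info() {
    dir="$1"
    if [ -f "$dir/xtrabackup_binlog_info" ]; then
        # Usually tab-separated: binlog file name, position (and GTID set, if any).
        cat "$dir/xtrabackup_binlog_info"
    else
        echo "xtrabackup_binlog_info missing in $dir" >&2
        return 1
    fi
}
```

If the file is absent here, the CHANGE MASTER TO step of the recipe has no coordinates to work with, which is exactly the symptom in this thread.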


Do you have binary logging enabled on the source database? I would expect the xtrabackup_binlog_info file to be missing only when the backup source was not configured with binary logging enabled.

Binary logging is not enabled. I thought that Xtrabackup would create whatever it needs instead (that isn’t actually clarified in the document).


It is actually written there:

TheMaster: A system with a MySQL-based server installed, configured and running. This system will be called TheMaster, as it is where your data is stored and the one to be replicated. We will assume the following about this system:
- the MySQL server is able to communicate with others by the standard TCP/IP port;
- the SSH server is installed and configured;
- you have a user account in the system with the appropriate permissions;
- you have a MySQL user account with appropriate privileges;
- the server has binlogs enabled and server-id set to 1.
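For completeness, that last prerequisite boils down to two settings in the master’s configuration file (a minimal sketch using MySQL 5.7 option names; the file location and binlog base name vary by setup):

```ini
# Minimal sketch of the relevant [mysqld] settings on TheMaster.
# "mysql-bin" is an example base name; restart mysqld after changing these.
[mysqld]
server-id = 1
log-bin   = mysql-bin
```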

Thanks, I’ve enabled binary logging and set the server-id to 1, as for a replication scenario. Now I get this while running innobackupex:

170606 16:57:31 >> log scanned up to (1352647906684)
InnoDB: Last flushed lsn: 1352646988898 load_index lsn 1352647910884
[FATAL] InnoDB: An optimized(without redo logging) DDLoperation has been performed. All modified pages may not have been flushed to the disk yet.
PXB will not be able take a consistent backup. Retry the backup operation
2017-06-06 16:57:32 0x7f5a21d9f700 InnoDB: Assertion failure in thread 140025091716864 in file ut0ut.cc line 916
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
14:57:32 UTC - xtrabackup got signal 6 ;
This could be because you hit a bug or data is corrupted.
This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong…
stack_bottom = 0 thread_stack 0x10000

Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

I don’t think that the tables are corrupted (although it’s an enormous database, ca. 50 GB), because the backup (and the DB itself) works well when doing it without binlogs.

[FATAL] InnoDB: An optimized(without redo logging) DDLoperation has been performed. All modified pages may not have been flushed to the disk yet.

It looks like a DDL statement (e.g. ALTER TABLE) was executed while the backup was in progress. We highly recommend not running any DDL while backups are in progress.

I have a similar question. I have a master -> slave setup and have taken a backup of the master DB using innobackupex.
I get all the data and the xtrabackup_checkpoints values created as mentioned in https://www.percona.com/doc/percona-xtrabackup/LATEST/backup_scenarios/incremental_backup.html, with one exception:
recover_binlog_info = 0. Shouldn’t it copy the last binlog file the slaves are reading, so that once the master is brought back the slaves can see the binlog file and resume replicating?
If not, we need to rebuild all the slaves again.
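If the xtrabackup_binlog_info file is present in the backup, re-pointing a slave shouldn’t require a full rebuild: its coordinates can be turned into a CHANGE MASTER TO statement. A hedged sketch (the helper name is mine; it assumes the first two whitespace-separated fields on the first line are the binlog file name and position, which matches what my backups contain):

```shell
#!/bin/sh
# Sketch (mine, not from the docs): build a CHANGE MASTER TO statement
# from the coordinates recorded in xtrabackup_binlog_info.
binlog_info_to_change_master() {
    # "$1" is the path to xtrabackup_binlog_info; after set --, $1/$2
    # become the binlog file name and position from its first line.
    set -- $(awk 'NR==1 {print $1, $2}' "$1")
    printf "CHANGE MASTER TO MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;\n" "$1" "$2"
}
```

The resulting statement would then be run on each slave (together with the usual MASTER_HOST/MASTER_USER options) instead of rebuilding it from scratch.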

Subodh, based on the source code, the recover_binlog_info option relates to whether the xtrabackup_binlog_info file will be created on recovery.