Backup MySQL using LVM snapshot


We are running a 3-node (VM) cluster: 1 master and 2 replicas. The replicas use multi-threaded, non-GTID replication with slave_parallel_workers = 8 and slave_parallel_type = LOGICAL_CLOCK. We take a backup from one of the replicas using the LVM snapshot feature.

Here are the steps we perform to take the backup:

  • Connect to MySQL on the replica.

  • Record the SHOW SLAVE STATUS\G command output.

  • Take an LV snapshot: lvcreate --size 1G --snapshot --name n_snapshot /dev/vgroot/lv_mysqldata (/dev/vgroot/lv_mysqldata is the LVM volume on which the MySQL datadir, /var/lib/mysql, is mounted).

  • Upload the snapshot to remote storage.
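The steps above can be sketched as a script. This is only a sketch of what the post describes; the output file, mount point, remote host, and use of rsync are assumptions, and the volume names match the lvcreate command shown above.

```shell
#!/bin/sh
set -e

# 1. Record replication coordinates from the replica (output path is an assumption).
mysql -e "SHOW SLAVE STATUS\G" > /root/slave_status_$(date +%F).txt

# 2. Snapshot the logical volume holding the MySQL datadir (/var/lib/mysql).
lvcreate --size 1G --snapshot --name n_snapshot /dev/vgroot/lv_mysqldata

# 3. Mount the snapshot read-only and copy it to remote storage
#    (rsync and the destination are placeholders; any copy tool works).
mount -o ro /dev/vgroot/n_snapshot /mnt/snapshot
rsync -a /mnt/snapshot/ backup-host:/backups/mysql/$(date +%F)/
umount /mnt/snapshot

# 4. Drop the snapshot so it does not fill up and invalidate itself.
lvremove -f /dev/vgroot/n_snapshot
```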

Backup restoration process when we want to create another replica:

  • Restore the latest backup to VM

  • Start MySQL once the restore is completed

  • Configure replication using Relay_Master_Log_File and Exec_Master_Log_Pos from the SHOW SLAVE STATUS\G command recorded during backup process.
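The last step would look roughly like this. The host, user, and coordinates below are placeholders; the file and position must come from the SHOW SLAVE STATUS\G output recorded at backup time.

```shell
# Placeholders throughout: substitute Relay_Master_Log_File and
# Exec_Master_Log_Pos from the SHOW SLAVE STATUS\G output saved
# during the backup.
mysql -e "CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- Relay_Master_Log_File
  MASTER_LOG_POS  = 45678;               -- Exec_Master_Log_Pos
START SLAVE;"
```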

The issue we are facing is that whenever we restore the backup and set up replication, replication breaks with an error: Error_code: 1062 Duplicate entry

MySQL configuration:

  binlog_format = "ROW"
  relay_log_info_repository = "TABLE"
  relay_log_recovery = "ON"
  master_info_repository = "TABLE"
  log_slave_updates = "ON"
  skip_slave_start = "OFF"
  slave_parallel_type = "LOGICAL_CLOCK"
  slave_parallel_workers = "8"
  slave_preserve_commit_order = "1"

Is there any process or command missing from our backup or restore steps that is causing the Duplicate entry issue?


Do you only get this issue once or twice, or is it ongoing? If you only get it once, that means your SHOW SLAVE STATUS output is out of sync with what is currently flushed to disk. FLUSH TABLES does not guarantee (and has never guaranteed) that all currently dirty pages are written to disk.

You might try something more like this: on the source, run SHOW BINARY LOGS; and make note of the current binlog file name. Then go to the replica and run STOP SLAVE; followed by START SLAVE UNTIL MASTER_LOG_FILE = '', MASTER_LOG_POS = 4;. Then go back to the source and run FLUSH BINARY LOGS; to rotate to the next binlog manually. This should cause the replica to stop at a consistent state. Then, on the replica, run SET GLOBAL innodb_max_dirty_pages_pct = 0;, give that a few minutes to flush all pages to disk, and then you can run FLUSH TABLES WITH READ LOCK and take your snapshot.
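A sketch of that sequence, assuming hostnames `source` and `replica` and a hypothetical binlog name (the UNTIL target should be the binlog file that the upcoming rotation will create, so the replica applies everything up to the rotation point):

```shell
# 1. On the source: note the current binlog file name.
mysql -h source -e "SHOW BINARY LOGS;"    # e.g. last file is binlog.000042

# 2. On the replica: stop, then replicate until the start of the *next*
#    binlog file (name below is a placeholder based on step 1).
mysql -h replica -e "STOP SLAVE;"
mysql -h replica -e "START SLAVE UNTIL MASTER_LOG_FILE = 'binlog.000043', MASTER_LOG_POS = 4;"

# 3. On the source: rotate the binlog so the replica reaches the UNTIL point.
mysql -h source -e "FLUSH BINARY LOGS;"

# 4. On the replica: force dirty pages to disk, wait, then lock and snapshot.
#    Note: FTWRL only holds while the session stays open, so in a real
#    script keep one interactive session for the lock.
mysql -h replica -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"
sleep 300    # give the background flusher time to drain dirty pages
mysql -h replica -e "FLUSH TABLES WITH READ LOCK; SYSTEM lvcreate --size 1G --snapshot --name n_snapshot /dev/vgroot/lv_mysqldata; UNLOCK TABLES;"
```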

OR… to simplify all of that, use Percona XtraBackup. :)


@matthewb Thanks for your response.

This issue is ongoing. Is it because we are using multi-threaded replication, or is it unrelated to that?

This is what we see in the documentation - MySQL :: MySQL 5.7 Reference Manual :: SHOW SLAVE STATUS Statement

This indicates that there might be some transactions committed after Exec_Master_Log_Pos that are not yet reflected in SHOW SLAVE STATUS.

Will the below-mentioned steps help in making the backup consistent?

  • Connect to MySQL on the replica.
  • Record the SHOW SLAVE STATUS\G command output.
  • Take an LV snapshot: lvcreate --size 1G --snapshot --name n_snapshot /dev/vgroot/lv_mysqldata (/dev/vgroot/lv_mysqldata is the LVM volume on which the MySQL datadir, /var/lib/mysql, is mounted).
  • Upload the snapshot to remote storage.

We do use xtrabackup and innobackupex for rebuilding replicas. But for some scenarios, such as:

  1. Cross DC rebuilds
  2. Creating multiple replicas at the same time.
  3. Cloning cluster for a hardware upgrade.

restoring from a remote backup will be faster in our case, and multiple nodes can be rebuilt at the same time, whereas with the xtrabackup method rebuilds are sequential.

XtraBackup is based on InnoDB's crash-recovery functionality. It works by recording the LSN when it starts copying data files; at the same time, it runs a background process that copies the transaction logs. Once it finishes copying the files, XtraBackup issues FLUSH TABLES WITH READ LOCK to prevent further changes to MySQL's data and takes a dump of the replication coordinates.

This is similar to what we are doing to take a backup: when the backup is taken, the entire datadir (data + transaction logs) is copied to remote storage.
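For comparison, a minimal XtraBackup invocation that also records the replica's replication coordinates might look like this (the target directory is an assumption):

```shell
# --slave-info records the source's binlog coordinates (in
# xtrabackup_slave_info) so a new replica can be pointed at the master.
xtrabackup --backup --slave-info --target-dir=/backups/mysql/base

# Apply the copied redo log so the datadir is consistent before restoring.
xtrabackup --prepare --target-dir=/backups/mysql/base
```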

We use these durability settings on all nodes in the cluster:

 innodb_flush_log_at_trx_commit = "1"
 sync_binlog = "1"

Hello, the steps you list should help you get past the issue. However, please note that the process you are following may create a snapshot that is not consistent, and MySQL might fail to start from it. Even if you flush the tables, there are many InnoDB background threads still doing work. I suggest you consider a different approach; as Matthew said, xtrabackup could be a good choice.


Thanks for the suggestion. We will work towards using the XtraBackup utility for backups of our MySQL clusters.

We would like to understand how XtraBackup captures/records the binlog position/GTID set, which is required to configure replication once the backup is restored. Can you point us to a document where this is described?

Our understanding is that XtraBackup records the LSN, starts copying data files, and at regular intervals copies the transaction log contents to xtrabackup_logfile. We have some more questions; please help us:

  1. At what frequency is the transaction log content copied to xtrabackup_logfile?
  2. At what point does XtraBackup record the binlog position?
  3. Does the step that records the binlog position depend on the performance schema, or are the positions captured from SHOW SLAVE STATUS or from the mysql.slave_relay_log_info table?
  4. Does XtraBackup stop the slave before recording replication coordinates if the backup is taken from a slave?

Here's a page that I believe answers your questions 1-3: How Percona XtraBackup Works - Percona XtraBackup
For question 4, yes, xtrabackup can do that as well; see Taking Backups in Replication Environments - Percona XtraBackup
