Percona backup used to create a secondary MySQL 5.7 instance now hits many duplicate-write and failed-delete errors during replication

Hi,

EDIT: Ah, this may be my issue:

Make sure that you are getting your replication coordinates from xtrabackup_slave_info (points to the master of the slave you backed up) and not xtrabackup_binlog_info (points to the slave itself that you backed up).
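
For reference, the two files carry different things (the file names are real xtrabackup outputs; the coordinate values below are only illustrative):

# xtrabackup_binlog_info: the backed-up server's own binlog file and position (tab-separated)
mysql-bin.000003    154

# xtrabackup_slave_info: a ready-made statement pointing at the backed-up server's master
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000451', MASTER_LOG_POS=98762;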

I’m using Percona XtraBackup 2.4 to migrate my MySQL 5.7 database to a secondary server for replication.
The backup appears to restore correctly, but once replication is set up I hit these two types of error repeatedly as the replica tries to catch up:

Could not execute Write_rows event on table db.table; Duplicate entry 'content' for key 'column.index', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY;

and

Could not execute Delete_rows event on table db.table; Can't find record in 'notifications', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND;

In both cases it’s trying to modify a row to its existing state, i.e., write a row that already exists or delete a row that is already deleted.
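
The same errors are also visible via SHOW SLAVE STATUS on the replica, e.g. (standard MySQL, nothing specific to my setup):

mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_SQL_Running|Last_SQL_Errno|Last_SQL_Error"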

This is the command I’m using for the backup:

xtrabackup --backup \
    --no-timestamp \
    --target-dir="{{ backup_dir }}" \
    --user="{{ mysql_root_user }}" \
    --host=127.0.0.1 \
    --password="{{ secrets.mysql_root_password.value }}"
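
For context, the restore on the secondary follows the usual prepare/copy-back sequence, roughly like this (a sketch; the datadir path, ownership, and service name are assumptions about my setup):

# Apply the redo log so the backed-up datadir is consistent
xtrabackup --prepare --target-dir="{{ backup_dir }}"

# Copy the prepared files into the (empty) MySQL data directory,
# then fix ownership and start the server
xtrabackup --copy-back --target-dir="{{ backup_dir }}"
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql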

I use the binlog coordinates from the xtrabackup_binlog_info file in the backup to build the CHANGE MASTER TO command.
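
Concretely, something like this on the new replica (host, user, and password are placeholders; the file and position come straight from xtrabackup_binlog_info):

CHANGE MASTER TO
    MASTER_HOST='primary.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='...',
    MASTER_LOG_FILE='mysql-bin.000003',
    MASTER_LOG_POS=154;
START SLAVE;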

With the same database, this mysqldump command works fine and results in clean replication:

mysqldump --all-databases --flush-logs --master-data --routines --single-transaction --triggers \
    -u "{{ mysql_root_user }}" \
    -h "127.0.0.1" \
    -p"{{ secrets.mysql_root_password.value }}" \
    > "{{ backup_dir }}/backup.sql"

Ideally I’d like to resolve the replication issue, since I’d prefer xtrabackup’s non-locking behavior.

I assume I’m just missing a setting in the xtrabackup CLI, or I need to adjust the binlog coordinates provided by the backup.

Marking this solved per the edit at the top of the OP.

Turns out I actually had the opposite issue, but working through this did resolve it.

That advice covers taking a backup of a replica with the intention of replicating from the primary, but ending up pointed at the replica itself instead.

I’m taking a backup of a primary (which was previously a replica itself), so I accidentally picked up the stale replication info, which pointed me at the binlog coordinates of its previous master.

So I actually wanted to use xtrabackup_binlog_info instead of xtrabackup_slave_info.