Percona XtraDB Cluster 8.0.27-18.1 backup strategy

Hello,

Looking to take full backups, differential backups, and log backups for our 3-node Percona clusters. I don't know the best method for this, or why you would use the xtrabackup method versus the innobackupex streamed-backup method. I need to be able to do, say, fulls on Sunday, differentials daily, and then log backups every 15 minutes. Is this possible to do while everything is running? Ideally it would be something I could script in a bash file and run on a schedule with a cron job.

I do have some examples of doing full backups with xtrabackup and of using differentials. We have a streamed backup method in use on one of our other clusters, but I'm not sure why it was set up like this:

innobackupex --galera-info --stream=tar ./ | pigz -p 4 > $backup_file (what is the difference between innobackupex and xtrabackup?)

Thanks for any insight.


Welcome @jasonfe33.
innobackupex is dead; everything should use xtrabackup now.

Is this possible to do while everything is running all the time?

Yes. You can run xtrabackup at any time to take the full and differential backups. In order to accomplish the “log backups” (I’m assuming you are referring to MySQL’s binary logs), you can set up a simple rsync cron job every 15m to sync those files somewhere safe.
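As a sketch, that binlog sync can be a single crontab entry. The datadir path, the `mysql-bin` basename, and the NFS destination below are assumptions; adjust them to match your `log_bin` setting and where you want the copies to land:

```shell
# m   h  dom mon dow  command
*/15  *  *   *   *    rsync -a /var/lib/mysql/mysql-bin.* /mnt/nfs/binlog-backups/
```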

I would recommend against tar streaming, as tar can't support parallel backups and thus requires very large temporary files during the backup process. pigz doesn't gain you much either, so I would stick with the built-in --compress (qpress). Here's the recommendation we use in our PXC Cluster Tutorial:

xtrabackup --backup --parallel=4 --compress --compress-threads=4 --stream=xbstream > $backup_file.xbs
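For what it's worth, restoring a stream made this way takes a few steps: unpack, decompress, prepare, copy back. Here's a sketch that just prints the steps rather than running them (the /restore directory and the backup.xbs filename are assumptions):

```shell
#!/bin/bash
# Print the restore steps for a compressed xbstream backup (sketch only;
# /restore, backup.xbs, and the datadir path are assumptions).
STEPS=$(cat <<'EOF'
xbstream -x -C /restore < backup.xbs            # unpack the stream
xtrabackup --decompress --target-dir=/restore   # undo --compress (needs qpress installed)
xtrabackup --prepare --target-dir=/restore      # apply redo logs to make the backup consistent
# stop mysqld and empty the datadir, then:
xtrabackup --copy-back --target-dir=/restore
chown -R mysql:mysql /var/lib/mysql
EOF
)
echo "$STEPS"
```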

This is awesome. I'm glad you said innobackupex is dead, as that clears it up right away, LOL. Going back to our existing backup job (innobackupex --galera-info --stream=tar ./ | pigz -p 4 > $backup_file): can I just replace it with the xtrabackup --backup --parallel 4 --compress --compress-threads 4 --stream xbstream > $backup_file.xbs command you mentioned? Is that just doing a full backup? At least, that was my impression.

Which node does it use when backing up the cluster? Or do we just assume the data on all nodes is the same, so a backup of one node captures everything?

We copy these backup files to an NFS share; I presume that is OK as well?

For configuring an instance/cluster to do full, differential, and binary log backups, can we just put these lines in a script and have a cron job run them?

Full backup: xtrabackup --backup --parallel 4 --compress --compress-threads 4 --stream xbstream > $backup_file.xbs (this will do a full backup?). Also, is it best to prepare the backups right after they run, in case we need to restore, or is it OK to prepare at restore time? I don't know what the recommended process would be.

How would I do differential backups? What command and or setup would I need to do this?

Then, lastly, for the binary log backups: an rsync cron job to sync them somewhere safe. Which options can I use with xtrabackup for this?

Thank you!


Also, I have to note that I was reading the wrong backup line. That innobackupex command was our old, pre-8.0 backup line; now it's xtrabackup --backup --galera-info --stream=xbstream | pigz > $backup_file. What is the difference between --parallel 4 and how we have it set up? We are also using --galera-info, and I'm not sure what that is doing. I'm a bit new to learning how this backup stuff works, so forgive me for the redundancy and/or lack of info :-).


--galera-info is no longer needed; that information is included by default now.
--parallel=4 lets xtrabackup copy tables in parallel, which should make your overall backup process faster (vs. single-threaded).

Yes, the command I gave above does a full backup each time.

Which node does it use when doing a backup for the cluster?

Whichever node you run xtrabackup on, that’s the node. Physical backups cannot be done remotely.

We copy these backup files to a NFS share, I presume that is ok as well?

Yes. Store the backups wherever you like.

For configuring an instance/cluster to do full/differential and binary log backups can we just put these lines in a script and a cron job to run these?

That's an extremely simplistic way of putting it, but yes, you can. Your script should be more robust than that: check the result of each backup and alert if necessary, make sure the full backup is present locally before taking a differential, etc.

https://docs.percona.com/percona-xtrabackup/8.0/backup_scenarios/incremental_backup.html

The only difference between an incremental backup and a differential backup is which path you pass to --incremental-basedir=. If the path you provide points to the full backup, you create differentials; if it points to the previous differential or incremental, you create incrementals.
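As a concrete sketch of the weekly-full / daily-differential scheme (paths like /backups are assumptions): this version prints the command it would run today rather than executing it, and it keeps the full backup as an uncompressed directory so it can serve as the base for differentials.

```shell
#!/bin/bash
# Weekly-full / daily-differential sketch. Prints today's xtrabackup command;
# a real cron script would execute it and check the exit status.
# BACKUP_ROOT and the naming scheme are assumptions.
BACKUP_ROOT=/backups
FULL_DIR=$BACKUP_ROOT/full
DIFF_DIR=$BACKUP_ROOT/diff-$(date +%F)

if [ "$(date +%u)" -eq 7 ]; then
  # Sunday: take a fresh full backup (uncompressed, so it can be a base)
  CMD="xtrabackup --backup --parallel=4 --target-dir=$FULL_DIR"
else
  # Mon-Sat: differential, always based against the Sunday full
  CMD="xtrabackup --backup --parallel=4 --incremental-basedir=$FULL_DIR --target-dir=$DIFF_DIR"
fi
echo "$CMD"
```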

binary log backups: an rsync cron job to sync them somewhere safe. Which options can I use with xtrabackup for this?

xtrabackup doesn't touch binary logs. Use rsync or plain cp.
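Putting the whole requested schedule together, the crontab could look roughly like this. The wrapper script name pxc-backup.sh is hypothetical (it would contain the full/differential commands discussed above), and all paths are assumptions:

```shell
# Full on Sunday 01:00, differential Mon-Sat 01:00, binlog sync every 15 min
0    1 * * 0   /usr/local/bin/pxc-backup.sh full
0    1 * * 1-6 /usr/local/bin/pxc-backup.sh diff
*/15 * * * *   rsync -a /var/lib/mysql/mysql-bin.* /mnt/nfs/binlog-backups/
```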
