So my question is,
Can xtrabackup compress files on the fly? We have a huge database and we're switching away from InnoDB Hot Backup, so it would be really nice.
Also, does it back up the slave binlogs? We don't want to have to reset replication after every restore.
You can use the streaming option for compression. For example, you can stream the backup to tar and pipe it through pigz to get multi-core compression, if you have the resources for it.
For example (I didn't look up the command-line args, they might be somewhat different, but you should get the gist):
innobackupex-1.5.1 /tmp/backup --stream=tar --slave-info | pigz > backup.tar.gz
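The same stream-and-compress pattern, sketched with plain files so it runs anywhere. gzip stands in for pigz here (pigz is a drop-in, multi-threaded replacement, as in the command above); the file name is invented for the demo:

```shell
set -e
src=$(mktemp -d); dst=$(mktemp -d); archive=$(mktemp)
echo "hello" > "$src/ibdata1.test"           # stand-in for a datafile

# Backup side: compress the tar stream on the fly.
tar -C "$src" -cf - . | gzip > "$archive"

# Restore side: decompress the stream and untar it.
# (For a real tar4ibd stream, keep tar's -i flag.)
gzip -dc "$archive" | tar -C "$dst" -xif -
cat "$dst/ibdata1.test"                      # → hello
```

The point is that nothing ever hits disk uncompressed on the backup host; the tar stream goes straight into the compressor's stdin.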
This won’t back up the slave binary logs or relay logs. They would likely change during the copy, and there is no locking mechanism for these files to prevent that from happening.
Instead, you can make your script automatically issue a CHANGE MASTER TO statement based on the xtrabackup_slave_info file the backup produces when run with --slave-info.
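For instance, a restore script could read that file and splice in the connection details. A minimal sketch, assuming the file contains a single ready-made statement; the log file name, position, and placeholder credentials below are invented for illustration:

```shell
# Sample of what xtrabackup_slave_info holds (values invented here):
info="CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=45678"

# Pull out the coordinates, e.g. for logging or a sanity check:
logfile=$(echo "$info" | sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p")
logpos=$(echo "$info" | sed -n "s/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p")
echo "restoring from $logfile at position $logpos"

# A restore script would then append host/credentials and run it:
#   mysql -e "$info, MASTER_HOST='<master>', MASTER_USER='<repl>', \
#             MASTER_PASSWORD='<pw>'; START SLAVE;"
```

That way the slave is pointed at the exact binlog coordinates the backup was taken at, with no manual position hunting.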
Thanks for the reply.
I just downloaded xtrabackup, and I'm testing a hot backup right now on a 220 GB InnoDB database that runs as a master.
Perhaps backing up the slave might be a better idea?
We'll probably do nightly backups, so changing the replica setup on either server before each backup might not be a good idea.
Basically, after restoring the master from a disaster, I would have to do a binary file copy to the slave (assuming the slave is hosed too for some reason) to match its position? A dump and restore would take way too long.