Can I backup remote databases from my local server

Hi,
So far I have been using mysqldump to take daily backups of my production databases. It takes around 6 to 7 hours for 210 GB of data, so I am looking for other backup methods.

I have a single utility server where I run mysqldump for all my production servers with one script, and I store the backup files on the same server. It is really easy to maintain.
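For context, a central backup script of the kind described here might look like the sketch below. The hostnames, user, password, and paths are placeholders, not anything from an actual setup:

```shell
#!/bin/sh
# Hypothetical central mysqldump script run from a single utility server.
# All names below (hosts, user, password, directory) are placeholders.
BACKUP_DIR=/backups/mysql
HOSTS="db1.example.com db2.example.com"

for host in $HOSTS; do
    out="$BACKUP_DIR/${host}_$(date +%F).sql.gz"
    # --single-transaction takes a consistent InnoDB snapshot without
    # holding table locks for the whole dump
    mysqldump --host="$host" --user=backupuser --password=secret \
        --single-transaction --all-databases | gzip > "$out"
done
```

This is the pattern that makes mysqldump convenient here: it only needs a network connection to each server, so one box can back them all up.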

Is that possible with Xtrabackup? Do I have to install and run xtrabackup on all my production servers? I can see there are options to transfer the backup files to any remote server, but I need to know whether I can run xtrabackup from a remote server connecting to all my database servers, as with mysqldump.

What are the options for it?

Thanks,


Xtrabackup needs access to the filesystem whereas mysqldump only connects to the MySQL server. I haven’t seen any attempts to use NFS to access files on the db-server.


I have used XtraBackup to back up remote servers to a local server. In fact I use this method a lot when I want to create a slave in a different geographic location. First I would suggest reading the XtraBackup Manual so you understand the differences between mysqldump and XtraBackup as they are very different.

If you determine you can use XtraBackup in your situation then I can share what I have done in the past in hopes that you can use part of it in your situation.

In this example, let's say we have two locations or data centers (DC1 and DC2), and that you have a MySQL server (A) at DC1 that you want to back up to DC2.

Because you want to back up a remote server to a local server, I will assume you may be connected over a VPN (which is what I have used this method over) or some other network link.

I suggest using pigz for compression instead of gzip, as pigz can use multiple processors. So if possible, install pigz on A at DC1 (google pigz).
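To illustrate the pigz suggestion: it is a drop-in parallel gzip, so you just put it in the pipe where gzip would go. The file paths below are arbitrary examples:

```shell
# pigz is a parallel, drop-in replacement for gzip; -p sets the thread count.
# Compress an example file with 4 threads:
pigz -p 4 -c /tmp/sample.dat > /tmp/sample.dat.gz

# The output is standard gzip format, so the receiving side can use plain
# gunzip -- no pigz needed there.
gunzip -t /tmp/sample.dat.gz
```

That gzip compatibility is why the receiving command later in this post uses plain gunzip even though the sender compresses with pigz.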

We are also going to make use of the “stream” option in XtraBackup and netcat (nc)

Please read the XtraBackup documentation to make sure you select the correct options for your situation.

At DC1 on A I run:
innobackupex --user=username --password=password --stream=tar ./ | pigz | nc -l 1234

At DC2, on the server where I want to receive the backup, I run the following in the directory where I want to store it (here A is the hostname or IP of the server at DC1):
nc A 1234 | gunzip | tar ixvf - 2> xtrabackup.log

Note, this will store your entire MySQL data directory uncompressed at DC2 on your backup server. This is useful for creating a slave at DC2, and there are great directions on this in the XtraBackup Manual.

I have done this with 600 GB databases in 2-3 hours over a VPN, and much quicker on a 1 Gb network (where server A had 16 cores).
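One step the stream itself does not cover: before the copied data directory can actually be started as a slave, it has to be prepared (crash recovery applied). A minimal sketch, assuming the tar stream above was extracted into a hypothetical /backupdirectory/mysql_data on the DC2 server:

```shell
# The streamed copy is a raw, inconsistent snapshot until it is prepared.
# innobackupex replays the redo log captured during the backup:
innobackupex --apply-log /backupdirectory/mysql_data

# Make sure MySQL can read the files before pointing a datadir at them:
chown -R mysql:mysql /backupdirectory/mysql_data
```

The XtraBackup Manual's "restoring a backup" section covers this prepare step in detail; the paths here are just placeholders.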

You could also just send the file like:

innobackupex --user=username --password=password --stream=tar ./ | pigz | ssh user@dc2_server "cat - > /backupdirectory/mysql_backup.tar.gz"

There are other great options you may need, such as --slave-info and --safe-slave-backup. If the locking at the end of the backup is an issue, you can use the --no-lock option and then run pt-table-checksum and pt-table-sync after you create your slave to bring it in line with your master. This is a bit more advanced, and while I have done it several times, I used Percona Support to do it (worth the investment, trust me!).
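For reference, the checksum-and-sync pass mentioned above can be sketched roughly like this. The hostnames and the checksum user are hypothetical, and the DSN syntax should be checked against the Percona Toolkit documentation for your version:

```shell
# Run on the master: writes per-chunk checksums into percona.checksums,
# which replicate to the slave so differences can be detected there.
pt-table-checksum --user=checksum_user --password=secret h=master_host

# Then print (or, with --execute, apply) the statements needed to bring
# the slave back in line with the master:
pt-table-sync --print --replicate percona.checksums h=master_host
```

Starting with --print rather than --execute is the safer way to see what pt-table-sync would change before letting it touch data.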


Thank you for your replies. Answered my question.

And Steve, thanks for the tips. I have already tried the option you explained to run the backup and move it to a remote server; I was using bzip2. I am new to XtraBackup, so I will keep exploring the other options and get back soon with more questions.


I have 2 TB of MySQL data to migrate as quickly as possible to another server on the same network. Can you tell me whether it will be faster if I use dedicated CPU and RAM? If so, how much should I dedicate?


Well, I have been doing this for a long time, but as Percona releases newer versions the commands change a bit, and there are differences across CentOS/RHEL releases too.
For MySQL 8.0.2* the remote backup command that I use looks like this.

ssh target-mysql-host -T << EOF | xbstream -x -C /backup/mysql/backups
xtrabackup --user=username --password=xxxxxx --backup --target-dir=/var/lib/mysql_backup/full --extra-lsndir=/var/lib/mysql_backup/XTRALSNR --stream=xbstream
EOF

Pre-requisites:
on mysql machines:

  1. Directories
    mkdir -p /var/lib/mysql_backup/full
    mkdir -p /var/lib/mysql_backup/XTRALSNR
    (the XTRALSNR directory will hold the files xtrabackup_checkpoints and xtrabackup_info;
    these are needed to perform incremental backups later)
  2. Passwordless SSH to and from the backup host for the mysql user
  3. Change /etc/passwd to give the mysql user a login shell (bash)

On Backup Host

  1. Directories:
    mkdir -p /backup/mysql/backups
    chown -R mysql:mysql /backup/mysql/backups
    I execute the backup command above from the backup host as the mysql user.
    Both the mysql and backup hosts are RHEL 8.6.
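Building on the setup above, a later run can reuse the checkpoints saved in XTRALSNR to take an incremental backup containing only the pages changed since the full one. A sketch under the same hostname and path assumptions (the inc1 directories are hypothetical):

```shell
# Incremental backup: --incremental-basedir points at the directory holding
# xtrabackup_checkpoints from the previous run, so only changed pages are
# streamed. Updating --extra-lsndir keeps the chain going for the next run.
ssh target-mysql-host -T << EOF | xbstream -x -C /backup/mysql/backups/inc1
xtrabackup --user=username --password=xxxxxx --backup \
    --target-dir=/var/lib/mysql_backup/inc1 \
    --incremental-basedir=/var/lib/mysql_backup/XTRALSNR \
    --extra-lsndir=/var/lib/mysql_backup/XTRALSNR \
    --stream=xbstream
EOF
```

The receiving directory has to exist first (mkdir -p, owned by mysql), same as for the full backup.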