Best practice to back up more than 3000 DBs with full and incremental backups

Hi All,

I need to back up more than 3000 small MySQL databases on one MySQL server (one full backup per day, plus an incremental backup every hour).
Does anybody have experience with database backups like that?

Thanks for any help and answers

Martin

A few questions to start:

  1. Are all of the databases in a single MySQL instance?

  2. Do you have a slave DB server that has a copy of all of the databases?

  3. Is it important to be able to restore a specific database (i.e. 1 of the 3000) quickly (at the expense of simplicity)?

  4. When you say “one full and every hour incremental per day”, do you mean one full backup each day, plus an incremental backup every hour?

  5. When you say the databases are small, what size are we talking? And what’s the transaction volume like?

We’d also like to know the best command flags and options to use. We have over 5,000 databases on a single server with 8 cores and 16GB of memory. We’d just like to run a full backup of every database twice a day (2pm and 2am); we don’t need to worry about incremental backups yet. We are currently using mysqldump, but it taxes the box when we run it: it takes quite a while and eats a lot of memory that does not seem to be reclaimed by MySQL (Percona).

1.) Yes
2.) No
3.) No, simple is best.
4.) Just full backups twice a day
5.) The average size per database is 1MB, so small.
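
For reference, our current dump run is roughly a per-database loop like this (a sketch only; the exact flags, filters and paths are assumptions, not our literal script):

# One compressed dump file per database, run twice a day.
# --single-transaction takes a consistent InnoDB snapshot without locking tables,
# --quick streams rows to the client instead of buffering whole tables in memory.
for db in $(mysql -N -B -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema)$'); do
    mysqldump --single-transaction --quick "$db" | gzip > "/backup/${db}.sql.gz"
done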

Hi Justink;

Your backup needs sound pretty straightforward, so that is good. Since all of your databases are in the same MySQL instance and you do not need incremental backups, it’s just a matter of getting through a full backup.

I would first suggest that you set up a slave DB server that replicates all of the databases, or multiple slaves that each replicate a portion of the databases (more complicated). That way you can perform the backup on the slave and not affect the performance of the master database server. Given that mysqldump already taxes the master DB server while taking the backup, you will probably see the same thing with Xtrabackup (though maybe less so, but you would have to test it).
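
As a rough sketch, seeding such a slave with Xtrabackup could look something like the following (host names, the replication user/password and the binlog coordinates are placeholders, not values from this thread):

# 1. On the master: take a full backup and prepare it.
innobackupex /backups/seed
innobackupex --apply-log /backups/seed/2014-01-01_02-00-00   # the timestamped directory it created

# 2. Copy the prepared backup into the slave's datadir (rsync/scp step omitted),
#    fix ownership, and start MySQL on the slave.

# 3. On the slave: point replication at the master, using the coordinates
#    recorded in xtrabackup_binlog_info inside the backup.
mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl',
  MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456;
  START SLAVE;"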

That aside, the main option to help speed up Xtrabackup/innobackupex is --parallel, which works best if you have innodb_file_per_table enabled. The other option is --stream=xbstream if you want to compress the backup at the same time you are taking it. Note, however, that speeding up the backup means a larger performance impact, so if you perform this on the master you probably want to slow the backup down instead so it does not tax the box (i.e. set --parallel=1, and look at using --throttle to limit disk IOPS).
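
Illustrative invocations only (thread counts, throttle values and paths are assumptions to tune for your box):

# On a dedicated slave: copy files in parallel (most useful with
# innodb_file_per_table) and stream + compress the backup in one pass.
innobackupex --parallel=4 --stream=xbstream /tmp | gzip > /backup/full.xbstream.gz

# On the master: go easy on the box instead: single-threaded copy,
# throttled to about 40 read/write pairs per second.
innobackupex --parallel=1 --throttle=40 /backup/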

If anything, just setting up a slave and doing your current backup process there would help you out a lot. Having the master as a single point of failure is not ideal for a handful of reasons, so I would definitely start with a slave first, which right away gives you a backup copy of the data (even better if you set up a separate delayed-replication slave for disaster recovery purposes). If an entirely separate slave DB server is not possible, even putting some extra disks in your current server so you could back up to a separate RAID10 partition would be better than nothing (for both performance and redundancy purposes). =)
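
If you do end up with a backup slave, the whole thing can be as simple as a cron entry there (paths and log handling are assumptions; the 2am/2pm schedule is the one you mentioned):

# m  h     dom mon dow  command
0    2,14  *   *   *    innobackupex --stream=xbstream /tmp 2>>/var/log/xtrabackup.log | gzip > /backup/full-$(date +\%F-\%H).xbstream.gz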

-Scott

Hi Scott.

Thanks for the reply. We are actually having trouble running innobackupex using SSL. I’ve created a thread on this at: http://www.percona.com/forums/questions-discussions/percona-xtrabackup/15596-how-to-backup-using-ssl

innobackupex --stream=tar ./ | gzip - > ~/mysql.backup.tar.gz

Fails with:

Error: Failed to connect to MySQL server: DBI connect(';mysql_read_default_group=xtrabackup','', ...) failed: SSL connection error at /usr/bin/innobackupex line 2949

We don’t use innodb_file_per_table because this creates lots of small files, which increase the I/O load and number of open file descriptors.
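
One untested idea we may try on the SSL side: since the Perl DBI connection in innobackupex reads the [client] and [xtrabackup] groups of the default option files, the SSL client options might be picked up from there. Certificate paths below are placeholders.

# Untested sketch: add the SSL client options to an option file the client library reads by default.
cat >> ~/.my.cnf <<'EOF'
[client]
ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/client-cert.pem
ssl-key=/etc/mysql/certs/client-key.pem
EOF

innobackupex --stream=tar ./ | gzip - > ~/mysql.backup.tar.gz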

Thanks for the help.