Failing @ Percona

Good afternoon,

I’m fairly new to using Percona’s XtraBackup. Our databases have grown to a size where mysqldump is taking up to 12 hours to back up an entire server and is causing locking issues. This obviously is no longer acceptable.

What I like about mysqldump is that I can directly back up and restore individual DBs. My understanding (from a face-to-face conversation with the CEO of Percona) is that XtraBackup is not so easy in this respect. There was some talk about doing a backup, restoring the data elsewhere, and copying back only what we need. I’m having a heck of a time finding good documentation for entry-level Percona skills.

I feel as though the Percona instructions (PDF) were written for someone with years of experience, and I am spending a lot of time on trial and error with a test VM.

Here is what I would like to do:

  1. Back up all the databases on a server, and if something becomes corrupt, be able to restore a single database or a table, etc. I’m not a fan of having to restore the entire thing when only portions are needed. Does this make sense? Thus far the copy-back seems to require EVERYTHING.

  2. I would like to be able to exclude a database from my backup. We have a tmp db that is constantly changing as temp data and tables are written. This causes the full backup to fail because XtraBackup attempts to write files it has already written; I’m guessing the tables were deleted and recreated during the backup process. I get an error like:

xtrabackup_56: Can't create/write to file '/media/Synology/2015-05-06_14-36-47/tmp/wrk_ReportStores_StoreList.ibd.pmap' (Errcode: 17 - File exists)
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2622.

If I scroll up in the SSH screen I see the file was created maybe 5 minutes earlier.

EXAMPLE:

[01] Copying ./tmp/wrk_reportstores_storelist.ibd to /media/Synology/2015-05-06_14-36-47/tmp/wrk_reportstores_storelist.ibd
[01] …done

This aborts the backup altogether. Is there an exclude option?

  3. The apply-log switch… do you do that AFTER you create the backup or BEFORE you do a restore? I want to make sure we aren’t writing to the production DB. I thought it was taking the log it captured and applying it to the backup files, but I want to be sure.

Here is the command I am running for the full backup:

sudo innobackupex --user=backupuser --password=backuppassword --compact /media/Synology/

Any help would be greatly appreciated! :)

Let me try to answer, but others may have better suggestions, so here goes:

1] You can run “--apply-log --export” during the prepare phase to create the .exp and .cfg files needed when importing individual tables; you can read further in the Percona XtraBackup documentation. This is going to be a tedious task if you plan on importing a lot of tables. A rough sketch of the process is below.
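Something like this should work on MySQL 5.6 with file-per-table tablespaces (mydb.mytable and the /var/lib/mysql datadir are placeholders, adjust for your setup; the timestamped directory is the one from your error message). The table must already exist on the destination with the same definition before you discard its tablespace:

innobackupex --apply-log --export /media/Synology/2015-05-06_14-36-47/
mysql -e "ALTER TABLE mydb.mytable DISCARD TABLESPACE;"
cp /media/Synology/2015-05-06_14-36-47/mydb/mytable.ibd /var/lib/mysql/mydb/
cp /media/Synology/2015-05-06_14-36-47/mydb/mytable.cfg /var/lib/mysql/mydb/
chown mysql:mysql /var/lib/mysql/mydb/mytable.*
mysql -e "ALTER TABLE mydb.mytable IMPORT TABLESPACE;"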

2] There is no exclude option, but I think you may be able to use either the --databases or --tables-file option; simply leave out the database you do NOT want backed up. For further reading, see the Percona XtraBackup documentation.
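For example, something along these lines (db1, db2, db3 are placeholders for the databases you DO want; anything not listed, such as your tmp database, should simply be skipped):

innobackupex --user=backupuser --password=backuppassword --databases="db1 db2 db3" /media/Synology/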

3] There is no specific rule on whether you run --apply-log right after the backup is created or just before you restore it. BUT, to make sure that your backup is valid, you can run --apply-log to check it. It runs against the backup files only, so it never touches the production data. To understand what --apply-log does, please refer to the Percona XtraBackup documentation; --apply-log implies running xtrabackup --prepare twice.
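Using the timestamped directory from your error message as an example, preparing/verifying the backup after it has been taken would look something like:

innobackupex --apply-log /media/Synology/2015-05-06_14-36-47/

One caveat worth double-checking in the docs: since your backup command uses --compact, the secondary indexes also need to be rebuilt at prepare time, i.e. innobackupex --apply-log --rebuild-indexes /media/Synology/2015-05-06_14-36-47/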

Thanks for getting back to me, jrivera.

Let’s say I have a server with 4 databases, 3 of which I want backed up. The database names are com, edw, trx, and tmp; I do not want tmp to be backed up. What would be the easiest way to do this? Let’s assume that each of the ones I want to back up has more than 100 tables, so listing the tables individually would be an epic waste of time.

I’m looking for simplicity where possible. Appreciate any assistance.

Get the list of schema.tables first and write it to a file:
mysql -uroot -p -Ne "select concat(table_schema,'.',table_name) from information_schema.tables where table_schema not in ('mysql','information_schema','performance_schema','tmp')" > filtered.txt

then run innobackupex --tables-file=/path/to/filtered.txt [other options here]
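Putting it together with the credentials and target path from your first post, the full command would presumably be something like:

sudo innobackupex --user=backupuser --password=backuppassword --tables-file=/path/to/filtered.txt /media/Synology/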

see if this works!