XtraBackup and innobackupex Hot Backup Solution

Hi guys,

I have set up a 3-node cluster that has been running for over 7 months.
Every 10 minutes a cron job takes an “Incremental Backup” via innobackupex-1.5.1 (at the beginning of each day it creates a new FULL backup and then only incrementals for the rest of the day).
So at any moment I have a backup of the critical data that is at most 10 minutes old. But what about the data changed since the last backup? Can I have a backup every 1 second (point-in-time recovery)?
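
A simplified sketch of that kind of cron job (not my exact script; the paths, credentials and options are just placeholders):

[code]
#!/bin/bash
# Daily FULL backup plus 10-minute incrementals with innobackupex.
# Paths and credentials below are placeholders, not the real script.

BASE=/backup/$(date +%F)       # one directory per day
FULL=$BASE/full

if [ ! -d "$FULL" ]; then
    # First run of the day: take a full backup.
    innobackupex --user=backup --password=secret --no-timestamp "$FULL"
else
    # Later runs: take an incremental based on the most recent backup.
    LAST=$(ls -1d "$BASE"/* | tail -n 1)
    innobackupex --user=backup --password=secret --no-timestamp \
        --incremental --incremental-basedir="$LAST" \
        "$BASE/inc-$(date +%H%M)"
fi
[/code]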

Our database keeps growing, and an incremental backup still has to read the whole database in order to find the pages that have changed since the last backup. So for really big databases, where a plain read already takes longer than 10 minutes, you can't take backups that often.

For example, the last “Incremental Backup” was taken at 2014-Apr-22_15-00-01 and the next one will be taken 10 minutes later. If a wrong SQL statement (DROP, DELETE, etc.) is executed during that interval, how can I restore the data?

Taking incremental backups every 10 minutes seems excessive and not scalable, so I would definitely rethink that process to start with. Managing it sounds like a nightmare, and as the database grows it becomes highly likely that the incremental backups will start to overlap because they cannot finish within the 10 minutes.

As for other options, you could use a delayed slave, meaning a slave database that is deliberately behind in time, say by an hour. Then if someone drops a table accidentally, you can go to the delayed slave and get the table back before the DROP TABLE statement is applied there. This can be done with pt-slave-delay or with MySQL 5.6's built-in functionality (see the sketch after the links below).

[url]http://www.percona.com/doc/percona-toolkit/2.1/pt-slave-delay.html[/url]
[url]https://dev.mysql.com/doc/refman/5.6/en/replication-delayed.html[/url]
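
A rough illustration of both approaches (the host names, credentials and the one-hour delay are placeholders):

[code]
# Option 1: pt-slave-delay keeps the slave's SQL thread about 1 hour behind.
pt-slave-delay --delay 1h --interval 15s h=delayed-slave,u=root,p=secret

# Option 2: MySQL 5.6 built-in delayed replication, configured on the slave.
mysql -h delayed-slave -u root -p -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 3600; START SLAVE;"
[/code]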

Thanks for your reply! But in this situation we still lose some data. I could restore a dropped or deleted table or database, but what about the data that was inserted into other tables or databases in the meantime?

So, as you said, I could restore the data as it was 1 hour ago (if the slave lags behind the master by 1 hour), but during that hour a lot of data will have been inserted into the “Master node” database, and I would lose it.

And if I set innobackupex-1.5.1 to run via crontab, say, every 1 hour, it would effectively be the same as a “Delayed Slave”.

I have 3 Percona XtraDB Cluster nodes (192.168.0.1, 192.168.0.2, 192.168.0.3), all connected via a 1 Gbit/s Ethernet switch. What should I do to prevent any data loss (even 10 seconds of data loss)?

Please advise us on the best solution.

And innobackupex-1.5.1, is it a combination of XtraBackup and innobackupex (which also allows backing up MyISAM tables)?

No matter what you do there is a chance of losing data. But with the delayed slave, you could likely recover from many situations with no data loss. For instance, if someone drops a table, you can use mysqldump to export just that table from the delayed slave and load it back onto the master.
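
For example, something along these lines (server names, credentials and the table name are placeholders):

[code]
# Stop replication on the delayed slave before the DROP TABLE reaches it.
mysql -h delayed-slave -u root -p -e "STOP SLAVE;"

# Export only the dropped table from the delayed slave.
mysqldump -h delayed-slave -u root -p mydb dropped_table > dropped_table.sql

# Load it back onto the master; the cluster replicates it to the other nodes.
mysql -h master -u root -p mydb < dropped_table.sql
[/code]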

Aside from this, the other method as mentioned is to use your binlogs. Then in a disaster scenario you would load your most recent backup (full backup and any incrementals), and then you could replay whatever binlogs you have from after the latest backup to get you up to the most recent data you have.
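
Very roughly, the replay step looks like this (the binlog file names, start position and stop time are made up for the example; the real start position comes from the xtrabackup_binlog_info file in the restored backup):

[code]
# Replay binlog events from the position recorded at backup time up to just
# before the bad statement. Names, positions and times are examples only.
mysqlbinlog --start-position=107 --stop-datetime="2014-04-22 15:07:00" \
    mysql-bin.000042 mysql-bin.000043 | mysql -u root -p
[/code]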

Could you please give me a link or some info on how I can do this?

How often should I do backups?

I have 3 PXC nodes. At the beginning of each day a “FULL Backup” is created on each of them, and an “Incremental Backup” is created 1 hour later. My bash script keeps a FULL backup and its incrementals for 12 hours. Then it creates a new FULL backup (and after that removes the previous FULL and its incrementals from the last 12 hours), takes an incremental backup 1 hour later, and so on every day. In outline it looks like the sketch below.
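
A simplified outline of the rotation part (not the real script; paths and credentials are placeholders):

[code]
#!/bin/bash
# Every 12 hours: take a new FULL backup, then drop the previous FULL
# backup and its incrementals. Paths are placeholders, not the real script.

NEW=/backup/set-$(date +%F_%H)
PREV=$(cat /backup/.current 2>/dev/null)

innobackupex --user=backup --password=secret --no-timestamp "$NEW/full"
echo "$NEW" > /backup/.current

# Remove the previous FULL backup and its incrementals once the new one exists.
if [ -n "$PREV" ]; then
    rm -rf "$PREV"
fi
[/code]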

How often you back up your data is primarily a business decision, with the caveat that there are technical limitations, so what the business wants may not always be feasible for what they are willing to pay for. From what I’ve seen, it is fairly common to do a full backup once a week and a daily incremental backup to supplement it.

For info on using binlogs, there are a lot of different sources, so you’ll want to read up on what the binlogs do and how to use them:

[url]https://dev.mysql.com/doc/refman/5.5/en/point-in-time-recovery.html[/url]
the Percona XtraBackup documentation, etc.

Hi,
Scott gave you a very good suggestion. You can check how often your binlog switches and decide how often you need to back it up.
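
For example (the data directory path is just a typical default):

[code]
# List the binlogs the server knows about and their sizes.
mysql -u root -p -e "SHOW BINARY LOGS;"

# Or check the files' timestamps to see how often a new binlog is started.
ls -lt /var/lib/mysql/mysql-bin.*
[/code]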

Hi ojoj!
Please advise a good solution.