Is it really true that XtraBackup requires the original table definition to be created before it can restore the table files using “IMPORT TABLESPACE”? If so, that seems like a big oversight. This functionality should be built into XtraBackup. Imagine this scenario:
DBA has a 500 GB MySQL database on a production server. The server is a standalone server (no replication).
An application or user (or careless DBA) accidentally drops a very important 1 GB table that is the key to the whole application.
DBA tries to restore the table from the XtraBackup backup, but doesn’t have the definition of the table to create the empty table first.
DBA gets fired, because the entire 500 GB (the whole server) needs to be restored just to recover the InnoDB data dictionary entries for that one table.
Am I missing something here? It seems like, as part of the backup process, XtraBackup should perform some sort of “mysqldump --no-data” on the backed-up databases and store the result alongside the backup for later use in recreating tables if needed. Better yet, I think the table definitions could be retrieved from the backup files using the minimal InnoDB engine that is used to apply the logs during the backup process.
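For readers finding this later, the restore path being discussed looks roughly like the sketch below. It is only a sketch: the database, table, column list, and paths are placeholders, and it assumes the backup was prepared with the --export option on a server that supports tablespace import (Percona Server with XtraDB, or MySQL 5.6+ transportable tablespaces, which may also need the .exp or .cfg file copied alongside the .ibd).

    # Hypothetical table mydb.orders; the CREATE TABLE is exactly the
    # definition this thread says you need to have saved somewhere.
    mysql -e "CREATE DATABASE IF NOT EXISTS mydb;"
    mysql -e "CREATE TABLE mydb.orders (id INT PRIMARY KEY, amount DECIMAL(10,2)) ENGINE=InnoDB;"

    # Detach the empty tablespace, copy the table file out of the prepared
    # backup, fix ownership, then attach it.
    mysql -e "ALTER TABLE mydb.orders DISCARD TABLESPACE;"
    cp /backups/full/mydb/orders.ibd /var/lib/mysql/mydb/
    chown mysql:mysql /var/lib/mysql/mydb/orders.ibd
    mysql -e "ALTER TABLE mydb.orders IMPORT TABLESPACE;"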
Good point. On the other hand, it is easy to get the table definition from a full backup, and mysqldump --no-data may take some time for a large number of tables.
How can I get the table definition from a full backup?
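Two possibilities, offered as a hedged sketch rather than an official procedure (the paths, table name, and socket are placeholders): read the table’s .frm file with the mysqlfrm tool from MySQL Utilities, or start a throwaway mysqld on a copy of the prepared backup and ask it for SHOW CREATE TABLE.

    # Option 1: read the definition straight out of the .frm file
    # (requires mysqlfrm from MySQL Utilities; no running server needed).
    mysqlfrm --diagnostic /backups/full/mydb/orders.frm

    # Option 2: start a temporary mysqld on a *copy* of the prepared backup
    # and query it; never point a server at your only backup copy.
    mysqld --no-defaults --datadir=/tmp/backup-copy \
           --socket=/tmp/restore.sock --skip-networking &
    mysql --socket=/tmp/restore.sock -e "SHOW CREATE TABLE mydb.orders\G"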
Also, what happens if my database has hundreds of tables? How can I easily discard and then import the tablespace for each one? Again, it seems like there is a great need to automate some of this in the XtraBackup product. Features like that would be worth paying for. I guess since it’s open source we can develop it ourselves, but everyone needs these features. We copy databases from server to server all the time.
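It is not a product feature, but the per-table loop can be scripted. A rough sketch, assuming the empty table definitions already exist on the target and the prepared backup holds one .ibd file per table (database name and paths are placeholders):

    BACKUP=/backups/full/mydb      # prepared backup directory for one schema
    DB=mydb
    for f in "$BACKUP"/*.ibd; do
        t=$(basename "$f" .ibd)
        mysql -e "ALTER TABLE \`$DB\`.\`$t\` DISCARD TABLESPACE;"
        cp "$f" "/var/lib/mysql/$DB/"
        chown mysql:mysql "/var/lib/mysql/$DB/$t.ibd"
        mysql -e "ALTER TABLE \`$DB\`.\`$t\` IMPORT TABLESPACE;"
    done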
I know this is an old thread, but this problem still exists 2 1/2 years later! Personally I think trying to retrieve the original definition of the tables from the .frm files or from the data dictionary is a pretty bad idea. That is way too kludgy, and if you have a production outage there’s no way you can be reading through web articles trying to figure out how to do it. So you’d definitely need to script the whole thing out, but if you did that, why not just have it in the product in the first place? PLUS, what about all the triggers, events, and procs that are associated with the database? I think those would be unrecoverable, which means your app is pretty much hosed if you do it that way.
I finally had to write my own post-backup script. As you may know, the backupset name is passed to the post-backup script by default in the app. The script simply takes that, goes to the index file, parses out all the database names, and iterates through them one by one to produce a DDL-only (--no-data) backup of each database, sticking the resulting .sql files in the same output directory as the backup itself.
Edit: Oh, I forgot to mention that we use Zmanda along with XtraBackup. That’s the piece that passes the backupset name; it has the index file that contains the names of all the databases that were backed up.
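For anyone wanting to do the same, here is a rough reconstruction of that kind of post-backup script. It assumes the backupset name arrives as the first argument and that the index file is a plain list with one database name per line; the real Zmanda index format may differ, so adjust the parsing. The --routines --triggers --events flags also cover the procs/triggers/events concern raised above.

    #!/bin/bash
    # Hypothetical post-backup hook: dump schema-only DDL for every database
    # named in the backupset's index file, next to the backup itself.
    BACKUPSET="$1"
    OUTDIR="/backups/$BACKUPSET"        # placeholder backup output directory
    INDEX="$OUTDIR/index"               # placeholder index file location
    while read -r db; do
        [ -z "$db" ] && continue
        mysqldump --no-data --routines --triggers --events "$db" \
            > "$OUTDIR/$db-schema.sql"
    done < "$INDEX"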