Performance of restoring a single table for large tables

Hi all,

I followed the guide here: https://www.percona.com/blog/2020/04/10/percona-xtrabackup-backup-and-restore-of-a-single-table-or-database/ to restore a single table from a previously created and prepared backup using the xtrabackup tool. The restore basically involves three steps:

  1. Discard the tablespace.
  2. Copy the table's files on disk.
  3. Import the tablespace.
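For reference, the three steps above can be sketched roughly as follows. This is a dry-run sketch that only prints the commands it would run; the database, table, and path names (`mydb.t1`, `/backup`, `/var/lib/mysql`) are hypothetical placeholders, not taken from the original post.

```shell
#!/bin/sh
# Sketch of the single-table restore flow from the linked guide.
# All names and paths below are assumed examples; adjust for your setup.
DB="mydb"; TABLE="t1"
BACKUP_DIR="/backup/$DB"
DATADIR="/var/lib/mysql/$DB"

# 1. Discard the existing tablespace (statement to run inside MySQL):
echo "ALTER TABLE $DB.$TABLE DISCARD TABLESPACE;"

# 2. Copy the table's .ibd (and .cfg, if exported) files from the
#    prepared backup into the datadir, and fix ownership:
echo "cp $BACKUP_DIR/$TABLE.ibd $BACKUP_DIR/$TABLE.cfg $DATADIR/"
echo "chown mysql:mysql $DATADIR/$TABLE.*"

# 3. Import the tablespace (statement to run inside MySQL); this is the
#    step whose duration grows with table size:
echo "ALTER TABLE $DB.$TABLE IMPORT TABLESPACE;"
```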

Steps 1 and 2 are reasonably quick and do not vary with the size of the table. The third step, however, scales linearly with data size, which is not surprising, but the latency is still too high for our use case. To restore a single table with a primary key and 10K rows, for example, it took about 1.42 seconds.

At this rate, a table with 100K rows takes ~10 seconds, which is beyond our tolerance limits. In our business, we expect to perform very frequent restore operations even for large tables. So I'm wondering: are there any tips to speed up this restore operation?

I initially tried without a primary key and it was worse (2.6 seconds for 10K rows); adding a primary key roughly halved that time.

Note that my table also has many secondary indexes.

Thank you,
Venkat.

Percona XtraBackup was not designed for frequent restores of single tables.
You may try mydumper or mysqlpump to see if you get a better experience with those tools.
But in general, I would advise reconsidering a setup that relies on frequent recovery of single tables - this is not a normal operational mode for MySQL.
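To make the mydumper suggestion concrete, a single-table dump and reload might look like the sketch below. It only prints the commands rather than executing them, since no server is assumed here; the database/table names, thread count, and output directory are hypothetical examples.

```shell
#!/bin/sh
# Sketch of a single-table dump/restore with mydumper/myloader.
# mydb.t1, the thread count, and DUMP_DIR are assumed placeholders.
DUMP_DIR="/backups/mydb_t1"

# Dump only mydb.t1 with 4 parallel threads:
echo "mydumper --database mydb --tables-list mydb.t1 --threads 4 --outputdir $DUMP_DIR"

# Load it back, overwriting the existing table:
echo "myloader --directory $DUMP_DIR --overwrite-tables --threads 4"
```

Whether this beats the tablespace-import path depends on row size and secondary-index count, since myloader rebuilds indexes on load; it is worth benchmarking both on your actual tables.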
