Need help taking incremental backups of my managed SQL instance on GCP

Hello everybody,

I am not clear on the procedure for implementing daily incremental backups for a production managed SQL instance on GCP with zero impact on operations (if possible).


Background & Approach

I plan to deploy a Virtual Machine on the cloud that will run the Percona XtraBackup toolkit. This VM will:

  • Perform daily incremental backups
  • Dump the backup files into a Google Cloud Storage (GCS) bucket

After the backups are stored in GCS, I will copy them to my local servers for permanent storage.
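For reference, the GCS-to-local copy step could be a simple scheduled sync along these lines (the bucket and path names below are placeholders, not my real setup):

```shell
# Sketch: mirror the backup bucket to local storage.
# -m parallelizes the transfer; rsync -r only copies new/changed objects,
# so repeated runs are cheap.
gsutil -m rsync -r gs://my-backup-bucket/xtrabackup /mnt/backup-archive/xtrabackup
```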


Questions I Need Help With

  1. Impact on the SQL Instance:
  • Will my SQL instance experience freezing or locking of database operations during the backup process?
  2. Incremental Backup Storage Concerns:
  • Given that my instance size is over 4TB, I’m concerned about storage overhead.
  • Is it practical to take incremental backups without keeping a full backup on the machine every time?
  • Do I need to keep previous backup files as reference points for generating the incremental backup, or is there a way to store less data on GCS?
  3. Restoration Process:
  • How can the backups be directly restored to a Cloud SQL instance on GCP?
  • Are there any best practices or tools you would recommend for a smooth restore?

Any insights, best practices, or experiences from those who have implemented a similar strategy would be highly appreciated.

Thanks in advance for your help!

Hello @Nilesh_Bhardwaj

No. Percona XtraBackup (PXB) is a non-blocking, hot backup solution for MySQL.

Yes. You will need to add additional parameters such as --extra-lsndir= since the full/previous incremental will not be kept locally.
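A rough sketch of what that looks like (paths are placeholders; this assumes each backup is uploaded to GCS and then deleted locally, so only the small LSN metadata directory stays on the machine):

```shell
# Day 0: full backup. --extra-lsndir keeps a copy of xtrabackup_checkpoints
# (the small metadata file) outside the backup directory itself.
xtrabackup --backup \
  --target-dir=/backups/full \
  --extra-lsndir=/backups/lsn

# ... upload /backups/full to GCS, then remove it locally ...

# Day 1+: incremental based only on the saved checkpoint metadata,
# so the previous full/incremental does not need to be kept on disk.
xtrabackup --backup \
  --target-dir=/backups/inc-$(date +%F) \
  --incremental-basedir=/backups/lsn \
  --extra-lsndir=/backups/lsn
```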

See previous. It’s just a text file (<10KB) with the backup metadata that needs to be kept locally.

Well, see, now we have a problem. Up until now, you did not say “Cloud SQL instance”, which is 100% different from “SQL instance on GCP” (implying a GCP instance running MySQL that you will manage).

Please clarify which because your ability to use PXB depends on it. PXB copies the raw .ibd files. With Google Cloud MySQL, you do not have any direct access to the disk, thus it will be impossible to use PXB.

With GCMySQL, you have three options:

  • Use the native GCP backups (Create and manage on-demand and automatic backups  |  Cloud SQL for MySQL  |  Google Cloud), which support automated point-in-time recovery.
  • Perform logical backups using mydumper. The downside of a logical backup is that you lose the incremental ability.
  • Run a delayed replica. By intentionally delaying a replica by 6 hours (for example), if disaster happens to the data, you have a 6-hour window to get to the replica, break replication, save the about-to-be-deleted data, and quickly recover.
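For the delayed-replica option, on a self-managed MySQL 8.0 replica the delay is set like this (as far as I know Cloud SQL does not expose this setting on its own read replicas, so this assumes an external replica that you manage; 21600 s = 6 h):

```shell
# Sketch: configure a 6-hour applied-transaction delay on a MySQL 8.0 replica.
mysql -e "STOP REPLICA SQL_THREAD;
          CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 21600;
          START REPLICA SQL_THREAD;"
```

On pre-8.0 versions the equivalent is `CHANGE MASTER TO MASTER_DELAY = 21600`.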

Thanks for the answers.

I don’t know if I’ll be able to use XtraBackup to directly take a backup of my GCP SQL instance.
Can you point me towards documents regarding the process, as I cannot find any?

I know you won’t, because you will not have access to the disk. PXB requires local disk access, which is not possible with any cloud-managed SQL.

If you are using a GCP instance that you manage and install MySQL onto yourself, then yes, you can use PXB. If it is Cloud SQL, then no, you cannot use PXB.

I get it now.
Have you ever been in a situation where a GCP Cloud SQL instance requires a backup?
Can you suggest anything for my specific scenario? How can I take a backup of an instance that is over 4TB?
From what I researched about mysqldump, it can’t do 4TB at once.

Any pointers would be very helpful.

mydumper works for 4TB systems, or you can use the native backup capabilities.
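For completeness, a mydumper invocation along these lines handles multi-TB instances by dumping tables in parallel, compressed chunks (host, user, and paths below are placeholders; `--ask-password` needs a reasonably recent mydumper):

```shell
# Sketch: parallel, chunked, compressed logical dump.
# --threads runs 8 dump workers; --rows splits large tables into
# ~1M-row chunks so no single file becomes unmanageable.
mydumper --host=10.0.0.5 --user=backup --ask-password \
  --threads=8 --rows=1000000 --compress \
  --outputdir=/backups/mydumper-$(date +%F)
```

The matching myloader tool restores the dump with the same degree of parallelism.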