Backup a cluster - beginner questions

Hey everyone,

We have a production-ready XtraDB Cluster setup and now want to plan how to create and restore backups of it.

Currently the setup is a 3-node cluster load-balanced by a ProxySQL server.

Since I can only find a link to the XtraBackup docs when searching within the cluster docs, we just want to ask:

  • What is the preferred way to set up backups for the cluster? Should it be a dedicated backup node, or should xtrabackup rather run on the cluster nodes themselves?

If you would be so kind as to give us a few tips, that would be great.

Best regards

Hello @Zody,
Percona XtraBackup (PXB) is the preferred way to handle backups. You can use PXB on any node in the cluster to perform backups. Ideally you would run PXB on one of the reader nodes, not on the current writer node.
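
For illustration, a minimal full-backup invocation on a reader node could look like this (a sketch; the credentials file and the target directory below are placeholders, not fixed names):

# Hypothetical example: full backup taken on one of the reader nodes.
xtrabackup --defaults-extra-file=/etc/backup-credentials.cnf \
    --backup \
    --target-dir=/backups/full-$(date +%F)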

Absolutely. I got my script working to create the backup; it uses a SAMBA/CIFS share as the target-dir, mounted this way in fstab:

//idonttellyou.your-storagebox.de/idonttellyou /mnt/backup       cifs    iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=backupuser,gid=backupuser,file_mode=0660,dir_mode=0770,vers=3.1.1 0       0

But when trying to prepare the backup, the following error occurs:

2025-04-01T09:57:04.089902-00:00 0 [ERROR] [MY-012574] [InnoDB] Unable to lock ./#innodb_redo/#ib_redo0 error: 13
2025-04-01T09:57:04.089977-00:00 0 [Note] [MY-012575] [InnoDB] Check that you do not already have another mysqld process using the same InnoDB data or log files.
2025-04-01T09:57:04.090020-00:00 0 [Note] [MY-012894] [InnoDB] Unable to open './#innodb_redo/#ib_redo0' (error: 11).
2025-04-01T09:57:04.090066-00:00 0 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Cannot open a file.
2025-04-01T09:57:04.168315-00:00 0 [ERROR] [MY-011825] [Xtrabackup] innodb_init(): Error occured

which leads me to this article, which in turn leads to the following statement:

“We will have to investigate if there is anything we can do here; for now, please prepare the backup on your local filesystem.”

Am I just missing a mount option, or is the double-renaming issue described there still present?

I would love to do the prepare within the SAMBA/CIFS share, because otherwise we would need to mount another SSD volume with a large amount of space just to have a temp dir in which to create and prepare the backup. The local filesystem is very small (/var/lib/mysql, for example, is also just another mounted SSD volume).

//Edit
Versions are:

percona-release/unknown,now 1.0-29.generic all [installed,upgradable to: 1.0-30.generic]
percona-telemetry-agent/stable,now 1.0.2-2.bookworm amd64 [installed,upgradable to: 1.0.3-4.bookworm]
percona-xtrabackup-80/unknown,now 8.0.35-32-1.bookworm amd64 [installed]
percona-xtradb-cluster-client/now 1:8.0.37-29-1.bookworm amd64 [installed,local]
percona-xtradb-cluster-common/now 1:8.0.37-29-1.bookworm amd64 [installed,local]
percona-xtradb-cluster-server/now 1:8.0.37-29-1.bookworm amd64 [installed,local]
percona-xtradb-cluster/now 1:8.0.37-29-1.bookworm amd64 [installed,local]

To create and prepare the backup, I have a backup script running on a remote server which checks which node it will use (verifying that the node is in the Synced state). Currently we don't have a read-only node, which is why the backup and prepare happen on a read/write node.
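
A minimal sketch of such a Synced check (not the actual script; $candidate_node is a hypothetical variable holding the node being tested) would be:

# Sketch: query the Galera state and only pick the node if it reports "Synced".
state=$(mysql --defaults-extra-file="$MYSQL_CREDENTIALS" -h "$candidate_node" -N -B \
    -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';" | awk '{print $2}')
if [ "$state" = "Synced" ]; then
    backup_node="$candidate_node"
fi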

Here are the commands that my script runs:

backup:

ssh $SSH_OPTIONS "$SSH_USER@$backup_node" "
    mkdir -p $BACKUP_PATH &&
    xtrabackup --defaults-extra-file=$MYSQL_CREDENTIALS --backup --target-dir=$BACKUP_PATH
" > >(tee -a /tmp/backup_$backup_node.log) 2>&1

prepare:

ssh $SSH_OPTIONS "$SSH_USER@$backup_node" "
    mkdir -p $BACKUP_PATH &&
    xtrabackup --defaults-extra-file=$MYSQL_CREDENTIALS --prepare --target-dir=$BACKUP_PATH
" > >(tee -a /tmp/backup_$backup_node.log) 2>&1

The SSH_USER we use here is a simple user that we created as follows:

useradd -d "/home/backupuser" -m -s /bin/bash -p 'IDONTTELLYOU' backupuser
usermod -aG mysql backupuser

The MySQL backup_user we use was created as follows:

CREATE USER 'backup_user'@'%' IDENTIFIED BY 'IDONTTELLYOU';

GRANT RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup_user'@'%';

GRANT SELECT ON performance_schema.global_status TO 'backup_user'@'%';
GRANT SELECT ON performance_schema.replication_group_members TO 'backup_user'@'%';
GRANT SELECT ON performance_schema.keyring_component_status TO 'backup_user'@'%';
GRANT SELECT ON performance_schema.log_status TO 'backup_user'@'%';

GRANT BACKUP_ADMIN ON *.* TO 'backup_user'@'%';

FLUSH PRIVILEGES;

Try doing the prepare on the Samba server itself, instead of going through Samba. That way, you are doing the prepare locally.

Unfortunately it is a storage box that is just mounted; we can't access the Samba server itself :frowning:

You will need to do some research on different Samba mount options regarding file locks. Samba isn't something we deal with, and we are not experts at it.
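
That said, one lock-related mount.cifs option that might be worth testing is nobrl, which stops byte-range lock requests from being sent to the server; whether it helps in your case is something you would need to verify yourself. In your fstab entry it would look like this:

# Untested assumption: the same entry as before, with nobrl appended to the mount options.
//idonttellyou.your-storagebox.de/idonttellyou /mnt/backup cifs iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=backupuser,gid=backupuser,file_mode=0660,dir_mode=0770,vers=3.1.1,nobrl 0 0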

Okay, then I would rather bring a read-only node into the game to handle the backups; that node would have enough local space for the backup.

But then I would have 3 read/write nodes and 1 read-only node. Does that bring split-brain issues? I wouldn't include the read-only node in our ProxySQL configuration, so it wouldn't receive any load-balanced requests.

Why not just add more local space to one of the existing nodes? If you're going to provision that space for a new node anyway, you could simply attach it to an existing node instead.

Curious: why do you want to do the prepare at this stage? Typically, you do the prepare as part of the restore process, not the backup process. It would greatly simplify things if you moved that part of the process to the restore step.
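
For reference, the restore on a node would then look roughly like this (a sketch, assuming the backup was first copied to a local directory such as /restore/backup and the datadir is /var/lib/mysql):

# Hypothetical restore flow on the target node; paths and service name are assumptions.
systemctl stop mysql                                 # stop the server on this node first
xtrabackup --prepare --target-dir=/restore/backup    # prepare at restore time, locally
rm -rf /var/lib/mysql/*                              # --copy-back requires an empty datadir
xtrabackup --copy-back --target-dir=/restore/backup  # copy the prepared files into the datadir
chown -R mysql:mysql /var/lib/mysql                  # fix ownership before starting MySQL
systemctl start mysql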