Hi,
I am testing a legacy backup system and am having problems at the prepare step.
The backup goes to S3, from MySQL 8.0.21, using xtrabackup version 8.0.25-17 based on MySQL server 8.0.25 Linux (x86_64) (revision id: d27028b).
The error I receive is:
# xtrabackup --prepare --apply-log-only --target-dir=test-restore/
xtrabackup: recognized client arguments: --prepare=1 --apply-log-only=1 --target-dir=test-restore/
xtrabackup version 8.0.25-17 based on MySQL server 8.0.25 Linux (x86_64) (revision id: d27028b)
xtrabackup: cd to /root/test-restore/
xtrabackup: This target seems to be not prepared yet.
Number of pools: 1
Operating system error number 2 in a file operation.
The error means the system cannot find the path specified.
xtrabackup: Warning: cannot open ./xtrabackup_logfile. will try to find.
Operating system error number 2 in a file operation.
The error means the system cannot find the path specified.
xtrabackup: Fatal error: cannot find ./xtrabackup_logfile.
The xtrabackup_logfile is indeed missing:
# ls -l test-restore/xtr*
-rw-r--r--. 1 root root 77 May 3 00:07 test-restore/xtrabackup_binlog_info.00000000000000000000
-rw-r--r--. 1 root root 36 May 3 00:07 test-restore/xtrabackup_binlog_info.00000000000000000001
-rw-r--r--. 1 root root 105 May 7 03:51 test-restore/xtrabackup_checkpoints
-rw-r--r--. 1 root root 161 May 3 00:07 test-restore/xtrabackup_checkpoints.00000000000000000000
-rw-r--r--. 1 root root 36 May 3 00:07 test-restore/xtrabackup_checkpoints.00000000000000000001
-rw-r--r--. 1 root root 734 May 7 03:51 test-restore/xtrabackup_info
-rw-r--r--. 1 root root 783 May 3 00:07 test-restore/xtrabackup_info.00000000000000000000
-rw-r--r--. 1 root root 29 May 3 00:07 test-restore/xtrabackup_info.00000000000000000001
-rw-r--r--. 1 root root 884276 May 3 00:07 test-restore/xtrabackup_logfile.00000000000000000000
-rw-r--r--. 1 root root 32 May 3 00:07 test-restore/xtrabackup_logfile.00000000000000000001
-rw-r--r--. 1 root root 95 May 3 00:07 test-restore/xtrabackup_tablespaces.00000000000000000000
-rw-r--r--. 1 root root 36 May 3 00:07 test-restore/xtrabackup_tablespaces.00000000000000000001
It’s missing from every full backup attempt.
The command that creates this backup on S3 is:
xtrabackup --backup --no-lock --stream=xbstream --target-dir="${DAILY_BACKUP_DIR}" --extra-lsndir="${DAILY_BACKUP_DIR}" \
--keyring-file-data=${KEYRING_NAME} \
--host=${DB_HOST} --port=${DB_PORT} --user=${DB_USER} --password="${DB_PASS}" \
--parallel=${NUM_THREADS} | \
xbcloud put --storage=s3 \
--s3-endpoint="${S3_ENDPOINT}" \
--s3-access-key="${S3_ACCESSKEY}" \
--s3-secret-key="${S3_SECRETKEY}" \
--s3-bucket="${S3_BUCKET}" \
--parallel=${NUM_THREADS} \
"${HOSTNAME}/${BACKUP_NAME}/full"
This is blocking me from preparing a backup. Specifically, I am trying to prepare a base (full) backup so that I can then apply incremental backups to it; the failure happens on that first, base prepare.
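For context, the sequence I am working toward is the standard full-plus-incrementals prepare (a sketch; base/, inc1/, and inc2/ are placeholder directories):

# prepare the base backup, leaving redo apply incomplete so incrementals can follow
xtrabackup --prepare --apply-log-only --target-dir=base/

# apply every incremental except the last with --apply-log-only
xtrabackup --prepare --apply-log-only --target-dir=base/ --incremental-dir=inc1/

# apply the final incremental without --apply-log-only; the backup is then ready
xtrabackup --prepare --target-dir=base/ --incremental-dir=inc2/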
Thank you in advance.
Hello @toby5box
Can you check the full backup log? See whether it completed successfully without any errors.
I think you didn’t use xbcloud to download the backup from S3; xbcloud is what splits the files into chunks.
So please use xbcloud get <> | xbstream
and it will merge the files back together.
You have all the files, but only in a format that xbcloud and xbstream understand. Please follow the steps from the documentation: Use the xbcloud binary with Amazon S3 - Percona XtraBackup
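A minimal sketch of that pipeline, assuming a bucket named my-bucket and a backup stored under myhost/mybackup/full (adjust names and credentials to your setup):

# download the chunked objects and re-assemble them into ordinary files
xbcloud get --storage=s3 --s3-bucket=my-bucket --parallel=8 myhost/mybackup/full \
    | xbstream -x -C restore-dir --parallel=8

After this, restore-dir will contain xtrabackup_logfile and the other files in their normal, un-chunked form.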
The backups always complete without error.
Here’s a snippet of the log:
May 8 00:06:50 db1-new CROND[3973240]: (root) CMDOUT (250508 00:06:50 xbcloud: successfully uploaded chunk: db1-new/Thu-2025-05-08-00-00-02/full/ib_buffer_pool.00000000000000000000, size: 10405383)
May 8 00:06:51 db1-new CROND[3973240]: (root) CMDOUT (xtrabackup: Transaction log of lsn (1643822304659) to (1643823311018) was copied.)
May 8 00:06:51 db1-new CROND[3973240]: (root) CMDOUT (2025-05-08T00:06:51.419302Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_file')
May 8 00:06:51 db1-new CROND[3973240]: (root) CMDOUT (2025-05-08T00:06:51.419375Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'daemon_keyring_proxy_plugin')
May 8 00:06:51 db1-new CROND[3973240]: (root) CMDOUT (250508 00:06:51 completed OK!)
May 8 00:06:52 db1-new CROND[3973240]: (root) CMDOUT (250508 00:06:52 xbcloud: Upload completed.)
May 8 00:06:52 db1-new CROND[3973240]: (root) CMDOUT (Completed 105 Bytes/105 Bytes (1.4 KiB/s) with 1 file(s) remaining#015upload: ../opt/x/backup-lns-data/Thu-2025-05-08-00-00-02/xtrabackup_checkpoints to s3://x-database-backups/db1-new/Thu-2025-05-08-00-00-02/full-lsn//opt/x/backup-lns-data/Thu-2025-05-08-00-00-02/xtrabackup_checkpoints)
May 8 00:06:53 db1-new CROND[3973240]: (root) CMDOUT (Completed 723 Bytes/723 Bytes (9.0 KiB/s) with 1 file(s) remaining#015upload: ../opt/x/backup-lns-data/Thu-2025-05-08-00-00-02/xtrabackup_info to s3://x-database-backups/db1-new/Thu-2025-05-08-00-00-02/full-lsn//opt/x/backup-lns-data/Thu-2025-05-08-00-00-02/xtrabackup_info)
May 8 00:06:53 db1-new CROND[3973240]: (root) CMDOUT (All backup operations complete. Total elapsed time: 0:06:51)
Satya,
No, I’m using the file-based backup (xbcloud), not xbstream.
I believe an xbstream-format backup does not have this problem, but that’s a separate topic. Before switching to xbstream, I want to understand why I am having this problem with xbcloud in the legacy configuration, and whether it is a genuine bug.
(I’m also re-testing the xbstream format right now, as I need to be sure that we do in fact have at least one restorable backup. But I would still like to understand the problem with xbcloud.)
Indeed, the xbstream format, when streamed to S3 as a single object, does contain the xtrabackup_logfile:
...
performance_schema/binary_log_trans_1681.sdi
performance_schema/tls_channel_stat_1682.sdi
demo/db.opt
mysql-bin.037583
mysql-bin.index
xtrabackup_binlog_info
xtrabackup_logfile
xtrabackup_checkpoints
ib_buffer_pool
backup-my.cnf
xtrabackup_info
xtrabackup_tablespaces
These are the last few lines of output from aws s3 cp s3://x-database-backups/db1-new/2025-05-08-16-02-38/full.xbstream - | xbstream -v -x -C test-restore.
And in this case, the prepare step completes successfully and I can start mysqld on the results.
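For completeness: that single-object backup was produced by piping the xbstream output straight to aws s3 cp instead of xbcloud, roughly like this (same variables as the earlier script; the exact invocation is abbreviated):

xtrabackup --backup --no-lock --stream=xbstream --extra-lsndir="${DAILY_BACKUP_DIR}" \
    --keyring-file-data=${KEYRING_NAME} \
    --host=${DB_HOST} --port=${DB_PORT} --user=${DB_USER} --password="${DB_PASS}" \
    --parallel=${NUM_THREADS} \
    | aws s3 cp - "s3://${S3_BUCKET}/${HOSTNAME}/${BACKUP_NAME}/full.xbstream"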
@toby5box if you use xbcloud, it still uses xbstream; note the --stream=xbstream parameter to xtrabackup. xbstream is just a format for streaming.
You used xtrabackup --backup | xbcloud put .... To use the backup, you HAVE to use xbcloud again: xbcloud get | xbstream -x -C restore --parallel=8. After this, you can run the prepare.
Thanks. I have tested this and it did work:
xbcloud get --storage=s3 --s3-bucket="${S3_BUCKET}" --parallel=${NUM_THREADS} $S3PATH \
| xbstream -x -C $TARGET_DIR --parallel=$NUM_THREADS
xtrabackup --prepare --keyring-file-data=`pwd`/mysql.keyring --no-apply-log --target-dir=$TARGET_DIR
Is there any reason not to store the xbstream as a single object on S3, versus the xbcloud many-objects backup? It seems that the single-object format (capturing the output of xtrabackup directly) is more efficient to download.
My logic was poor: the combination of many objects (xbcloud) and multithreaded transfer is likely to be faster, by as much as 2x in my tests.
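For anyone weighing the two approaches, a rough way to compare them on a given network (a sketch; bucket, host, and backup names are placeholders, and both target directories must already exist):

mkdir -p restore-multi restore-single

# many small objects, fetched in parallel by xbcloud
time sh -c 'xbcloud get --storage=s3 --s3-bucket=my-bucket --parallel=8 myhost/mybackup/full | xbstream -x -C restore-multi --parallel=8'

# one large object, fetched as a single stream
time sh -c 'aws s3 cp s3://my-bucket/myhost/mybackup/full.xbstream - | xbstream -x -C restore-single'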