[ERROR] Shell script - xtrabackup: Error: cannot mkdir 2

So I have a script that runs a full backup using xtrabackup.
When I run the command directly it works, but when I run it from the shell script I get this error:

xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --server-id=1 --log_bin=/mysqllog/binlog/mysql-bin.log --innodb_file_per_table=1 --innodb_buffer_pool_size=2G --innodb_log_buffer_size=100M --open_files_limit=100000 --parallel=4 --open_files_limit=100000 
xtrabackup: recognized client arguments: --backup=1 --user=backup --password=* --extra-lsndir==/backup/ --target-dir=/backup/db_full --compress --compress-threads=4 
221208 19:31:50  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'backup'  (using password: YES).
221208 19:31:50  version_check Connected to MySQL server
221208 19:31:50  version_check Executing a version check against the server...
221208 19:31:50  version_check Done.
221208 19:31:50 Connecting to MySQL server host: localhost, user: backup, password: set, port: not set, socket: not set
Using server version 5.7.25-log
/usr/bin/xtrabackup version 2.4.24 based on MySQL server 5.7.35 Linux (x86_64) (revision id: b4ee263)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 100000, set to 100000
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 50331648
InnoDB: Number of pools: 1
xtrabackup: Error: cannot mkdir 2: /apps/latest_script/scripts/dbbackup/=/backup/

The user-provided path /backup/db_full does not exist.

Hi @_Sumit_Debnath

Can you please share your shell script?

It seems like xtrabackup is recognizing the wrong path:

/apps/latest_script/scripts/dbbackup/=/backup/
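That stray `=/backup/` explains the full path in the error. A small sketch of what likely happens (the directory `/apps/latest_script/scripts/dbbackup` is taken from the error output; the splitting shown is an assumption about how the option value is parsed on the first `=`):

```shell
# With --extra-lsndir==/backup/, everything after the FIRST '=' is the
# option value, so the value becomes '=/backup/' -- a relative path.
opt='--extra-lsndir==/backup/'
value="${opt#--extra-lsndir=}"   # strip the option name and the first '='
echo "$value"                    # =/backup/

# A relative path is resolved against the working directory of the
# script, which is why the error shows the two joined together:
cwd='/apps/latest_script/scripts/dbbackup'   # from the error output above
echo "${cwd}/${value}"           # /apps/latest_script/scripts/dbbackup/=/backup/
```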
#!/bin/bash

# Based on https://github.com/woxxy/MySQL-backup-to-Amazon-S3
# Full backups at the start of each month and week; differential backups on the remaining days.
# Param: auto | month | week | day
# By default: auto
#
# Source the variables script (DRY)
DIR="${BASH_SOURCE%/*}"
if [[ ! -d "$DIR" ]]; then DIR="$PWD"; fi
. "$DIR/variables.sh"

# Week num, from 01 to 53, weeks starting Monday
week_curr=$(date +"%V")
# Week num two weeks ago (01-53)
week_minus2=$(date --date="2 weeks ago" +"%V")
# Month: 01-12
month_curr=$(date +"%m")
# Month minus 2 (1..12)
month_minus2=$(date --date="2 months ago" +"%m")

DATESTAMP=$(date +"_%Y%m%d_%H%M%S")
# Day: 01-31
DAY=$(date +"%d")
# Day of week: 1 (Monday) to 7 (Sunday)
DAYOFWEEK=$(date +"%u")

PERIOD=${1-auto}

if [ ${PERIOD} = "auto" ]; then
	if [ ${DAY} = "01" ]; then
		PERIOD=month
	elif [ ${DAYOFWEEK} = "1" ]; then
		PERIOD=week
	else
		PERIOD=day
	fi	
fi

if [ ${PERIOD} = "month" ]; then
	CURRENT_MINUS2="month_${month_minus2}"
	CURRENT="month_${month_curr}"
elif [ ${PERIOD} = "week" ]; then
	CURRENT_MINUS2="week_${week_minus2}"
	CURRENT="week_${week_curr}"
else
	CURRENT="day_$(date +"%u")"
fi

echo "*************** Selected period: $PERIOD. Current: $CURRENT *************"

echo "*************** Starting backing up the database to a file... ***********"

if [ ${PERIOD} = "week" ] || [ ${PERIOD} = "month" ] ; then
	# Remove previous full-backup from local filesystem
	BACKUP_DIRNAME=${BACKUP_DIRNAME}_full
	rm -rf ${BACKUP_PATH}${BACKUP_DIRNAME}
	# perform backup
    ${PERCONA_BACKUP_COMMAND} --backup --user=${MYSQLROOT} --password=${MYSQLPASS} --extra-lsndir==${BACKUP_PATH} --parallel 4 --target-dir=${BACKUP_PATH}${BACKUP_DIRNAME} --compress --compress-threads=4 --open-files-limit=100000
else
	# Remove previous differential-backup
	echo "*************** Removing previous differential backup dir ***************"
	rm -rf ${BACKUP_PATH}${BACKUP_DIRNAME}
	# perform backup
    lsn=$(awk '/to_lsn/ {print $3}' ${BACKUP_PATH}/xtrabackup_checkpoints)
	${PERCONA_BACKUP_COMMAND} --backup --user=${MYSQLROOT} --password=${MYSQLPASS} --parallel 4 --target-dir=${BACKUP_PATH}${BACKUP_DIRNAME} --compress --compress-threads=4 --open-files-limit=100000 --incremental-lsn=${lsn}
    #${PERCONA_BACKUP_COMMAND} --user=${MYSQLROOT} --password=${MYSQLPASS} --no-timestamp --incremental ${BACKUP_PATH}${BACKUP_DIRNAME} --incremental-basedir=${BACKUP_PATH}${BACKUP_DIRNAME}_full --parallel=4 --use-memory=640M
	# apply logs
	#${PERCONA_BACKUP_COMMAND} --user=${MYSQLROOT} --password=${MYSQLPASS} --no-timestamp --incremental ${BACKUP_PATH}${BACKUP_DIRNAME} --incremental-basedir=${BACKUP_PATH}${BACKUP_DIRNAME}_full --parallel=4 --use-memory=640M --apply-log
fi

echo "*************** Done backing up the database to a file. *****************"
##echo "*************** Starting compression... *********************************"

##echo "tar czf ${BACKUP_PATH}${BACKUP_DIRNAME}${DATESTAMP}.tar.gz -C ${BACKUP_PATH} ${BACKUP_DIRNAME}" 

##tar czf ${BACKUP_PATH}${BACKUP_DIRNAME}${DATESTAMP}.tar.gz -C ${BACKUP_PATH} ${BACKUP_DIRNAME}

##echo "*************** Done compressing the backup file. ***********************"

# upload all databases
echo "*************** Uploading the new backup... *****************************"
#xbcloud put ${BACKUP_PATH}${BACKUP_DIRNAME}${DATESTAMP}.xbstream --storage=s3 --s3-endpoint='s3.ap-south-1.amazonaws.com' --s3-access-key='<redacted>' --s3-secret-key='<redacted>' --s3-bucket='<redacted>' --s3-region='ap-south-1' --parallel=10 $(date -I)-full_backup/f
TIME=$(date +%R_%Z)
aws s3 sync ${BACKUP_PATH}${BACKUP_DIRNAME} s3://${S3BUCKET}/${S3PATH}${CURRENT}/${TIME}
echo "*************** New backup uploaded. ************************************"

# Remove old backups from 2 periods ago, if period is month or week, plus daily differential backups
if [ ${PERIOD} = "week" ] || [ ${PERIOD} = "month" ] ; then
	echo "Removing old backup (2 ${PERIOD}s ago)..."
	aws s3 rm --recursive s3://${S3BUCKET}/${S3PATH}${CURRENT_MINUS2}/
	echo "Old backup removed."
	echo "Removing daily differential backups..."
	week_days=(day_1 day_2 day_3 day_4 day_5 day_6 day_7)
	for i in "${week_days[@]}"
	do
		echo "Removing $i"
		aws s3 rm --recursive s3://${S3BUCKET}/${S3PATH}${i}/
	done
fi

# echo "*************** Removing the cache files... *****************************"
# # remove compressed databases dump
# rm ${BACKUP_PATH}${BACKUP_DIRNAME}${DATESTAMP}.xbstream
# echo "*************** Cache files removed. ************************************"
# echo "All done."

Check line 60:

    ${PERCONA_BACKUP_COMMAND} --backup --user=${MYSQLROOT} --password=${MYSQLPASS} --extra-lsndir==${BACKUP_PATH} --parallel 4 --target-dir=${BACKUP_PATH}${BACKUP_DIRNAME} --compress --compress-threads=4 --open-files-limit=100000

`--extra-lsndir==${BACKUP_PATH}` has two `=` signs where there should be only one.
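For reference, a sketch of the fixed flag (the `BACKUP_PATH` value here is assumed to match the poster's `variables.sh`; the full xtrabackup line is shown only as a comment, not executed):

```shell
# Fixed flag: exactly one '=' between the option name and its value.
BACKUP_PATH=/backup/
fixed_flag="--extra-lsndir=${BACKUP_PATH}"
echo "$fixed_flag"   # --extra-lsndir=/backup/

# The corrected line in the script would then read (not run here):
# ${PERCONA_BACKUP_COMMAND} --backup --user=${MYSQLROOT} --password=${MYSQLPASS} \
#     --extra-lsndir=${BACKUP_PATH} --parallel=4 \
#     --target-dir=${BACKUP_PATH}${BACKUP_DIRNAME} \
#     --compress --compress-threads=4 --open-files-limit=100000
```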


What a silly mistake :sweat_smile:
Thanks a lot!
