Is mysqldump creating additional files in /tmp?

Hello

I’ve got a problem with my server, which is probably connected to the backup script. The problem looks like this: about one minute after the bash script with mysqldump is executed, my server stops delivering dynamic pages (there is no answer from MariaDB; static files are still delivered without any problem), no logs are written, and I can’t even log in via SSH. Everything goes back to normal after a reset. The database itself isn’t very big: it’s almost 50GB, and there is 100GB of free space on my hard drive. After the last “crash” I checked the size of the file created by mysqldump, and it’s almost 5GB, so if the problem really is disk space, then something is eating the other 95GB, and that file (or files) would have to be inside the /tmp/ directory. Am I right?

How can I check why this is happening? Is there something I could do to prevent the hard disk from reaching 100% usage while I’m creating the database backup?

And maybe one more question, not directly related to mysqldump: is there something I can set so that SSH won’t get messed up when the disk is full?

As for the server itself, it’s Debian Stretch with MariaDB 10.1.26. The dump line uses the following options: all databases, three ignore-table entries, and a default character set.
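Roughly like this (the database and table names below are only placeholders, not the real ones):

mysqldump --all-databases --ignore-table=db1.table1 --ignore-table=db1.table2 --ignore-table=db2.table3 --default-character-set=utf8 > /backup/all-databases.sql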

Edit: One more thing that could be important. There is a mix of engines: some of the tables are InnoDB and some are MyISAM.

Hi Januzi,

If the DB size is 5 GB and you have 100 GB of free disk space, then the problem is unlikely to be the backup size itself.

To monitor whether files are being created under the /tmp directory, you can start an inotify watch before running the script:

Since the server is Debian, you can install inotify-tools from the package repositories:

apt-get install inotify-tools

inotifywait -m -r -e create /tmp

The command above will report any file created under /tmp.
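Since the server becomes unresponsive, it may also help to run the watch in the background and append its output to a log file you can review after the reset (the log path here is just an example):

inotifywait -m -r -e create --timefmt '%F %T' --format '%T %w%f' /tmp >> /root/tmp-watch.log 2>&1 &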

If the server gets unresponsive, I would say it is likely related to resource saturation (maybe high CPU usage). I would suggest setting up a monitoring script that collects information at a one-second sample interval and starting it before running the backup, so that after the issue you can review the outputs to find the bottleneck or the saturated resource.

PTDEST="/tmp/outputs"
MYSQL="mysql -uroot -pSekr3tr"
[ -d "$PTDEST" ] || mkdir "$PTDEST"

while true; do
  [ -f /tmp/exit-percona-monitor ] && echo "exiting loop (/tmp/exit-percona-monitor is there)" && break
  d=$(date +%F_%T | tr ":" "-")
  $MYSQL -e "SHOW ENGINE INNODB STATUS\G" > $PTDEST/$d-innodbstatus
  netstat -s > $PTDEST/$d-netstat_s
  ps faux > $PTDEST/$d-ps
  df -h > $PTDEST/$d-df
  top -bn1 > $PTDEST/$d-top
  vmstat 1 10 > $PTDEST/$d-vmstat &
  iostat -dmx 1 10 > $PTDEST/$d-iostat &
  mpstat -P ALL 1 10 > $PTDEST/$d-mpstat &
  sar -n DEV 1 10 > $PTDEST/$d-sar_dev &
  sar -n TCP,ETCP 1 10 > $PTDEST/$d-sar_tcp &
  sleep 10.1  # add 100ms to avoid overstepping
done
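To run it in the background and stop it once the backup has finished, something like this should work (the script filename is just an example):

nohup bash percona-monitor.sh >/dev/null 2>&1 &

# run the backup, then stop the monitor:
touch /tmp/exit-percona-monitor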

I would also suggest taking a backup of one database at a time instead of all databases, to help identify the problem.
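Something along these lines would dump each database to its own file (the backup path and the skipped system schemas are only examples; the credentials are the same ones used in the monitoring script above):

for db in $(mysql -uroot -pSekr3tr -N -e "SHOW DATABASES" | grep -vE "^(information_schema|performance_schema)$"); do
  mysqldump -uroot -pSekr3tr "$db" > /backup/"$db".sql
done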

Regards,

Hello

The whole DB is 50GB and the dump file itself reaches only 5GB. It can’t be CPU saturation, because then Apache wouldn’t be working either (I know that from experience, because I once accidentally ran several hundred Gnuplot processes at the same time).

Thank you for the script. This could help in finding the main problem.

As for the single-database backup, that could also work. I’d have to change my script a little so it splits the work into smaller tasks, but that’s doable.