Hi Francisco, thanks for responding. I’m running 5.1.6-19. I looked a little at the bug, and it appears (to me) that I am using the default location.
[mysql@mitbimysql ~]$ cat /etc/my.cnf | grep bin
binlog_format=mixed
log-bin=bi-bin
-rw-rw-r-- 1 mysql mysql 16 Feb 11 18:15 bi-bin.index
-rw-rw---- 1 mysql mysql 472167695 Feb 12 14:05 bi-bin.000263
[mysql@mitbimysql ~]$ pwd
/var/lib/mysql
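For good measure, the same can be confirmed from the server itself (a quick sanity check; the comments are what I would expect given the listing above):
mysql -e "SHOW VARIABLES LIKE 'datadir';"    # should report /var/lib/mysql/
mysql -e "SHOW MASTER STATUS;"               # current binlog, e.g. bi-bin.000263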
I have a 100 GB drive to write a 26 GB database to, and my /tmp is part of /, so if I filled / up during a backup it would have far worse ramifications.
I have cloned the VM, and I can start and stop MySQL on the clone, but I can’t get the backup to run to completion there either.
ANY suggestions would be appreciated. Thanks, Dean
FYI, this is my exact backup error (note that the last table it fails on changes almost every time):
Copying ./resourcespace/user_rating.ibd
to /backup/mysql/data/2021-02-11_15-34-29/resourcespace/user_rating.ibd
…done
Copying ./resourcespace/resource.ibd
to /backup/mysql/data/2021-02-11_15-34-29/resourcespace/resource.ibd
210211 15:37:34 InnoDB: Operating system error number 9 in a file operation.
InnoDB: Error number 9 means 'Bad file descriptor'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html
InnoDB: File operation call: 'close'.
InnoDB: Cannot continue operation.
Checking the logs with one of our system admins, we did not find any related errors.
Wow, this is a pretty old version and a number of bugs could be related. Please check whether the ulimit (the number of file descriptors allowed per user) is causing the issue.
Depending on the OS it can be configured differently, but you can try ulimit -n 40000.
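For example, on a typical Linux box you can raise it for the shell that launches the backup, and persistently for the mysql user via /etc/security/limits.conf (the value and user name here are just a suggestion):
ulimit -n 40000                  # current shell only, until logout
# /etc/security/limits.conf, applied at the user's next login:
mysql  soft  nofile  40000
mysql  hard  nofile  40000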
Great idea. We upped the ulimit once before, from 1024 to 2048, and it fixed this type of error. We just tried doubling it again and it did not help. I will suggest we increase it to 40000 and see what I can get the SA to do. Thanks.
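One wrinkle worth noting here: a raised ulimit only affects processes started after the change, so it pays to confirm what the running daemon actually got (a quick check, assuming Linux and a running mysqld):
cat /proc/$(pidof mysqld)/limits | grep 'open files'    # limit of the live process
mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"      # what mysqld thinks it may open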
Hey Francisco, I have a more serious prod problem that I discovered when comparing this server to our “main” production server. The main one appears to have a different backup error that was failing silently.
log scanned up to (25918851733043)
xtrabackup: Generating a list of tablespaces
2021-02-12 12:50:43 7fa4d12d0720 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
innobackupex: got a fatal error with the following stacktrace: at /twks/bin/innobackupex line 2711
main::wait_for_ibbackup_file_create('/work/BACKUP/mysql/2021-02-12_12-50-42/xtrabackup_suspended_2') called at /twks/bin/innobackupex line 2731
main::wait_for_ibbackup_suspend('/work/BACKUP/mysql/2021-02-12_12-50-42/xtrabackup_suspended_2') called at /twks/bin/innobackupex line 1984
main::backup() called at /twks/bin/innobackupex line 1609
innobackupex: Error: The xtrabackup child process has died at /twks/bin/innobackupex line 2711.
find: `/work/BACKUP/mysql/20*/mysql-stderr': No such file or directory
It only generates two small files:
[username@XXXXXXXX 2021-02-12_12-50-42]$ ls
backup-my.cnf xtrabackup_logfile
Other than removing the lost+found directory (owned by root), I haven’t found anything that seems viable yet. Do you have any suggestions?
This looks like a permission error; either run xtrabackup with sudo or as the mysql user. Normally I execute xtrabackup as root to avoid permission errors on files.
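For instance, something like this (the credentials are placeholders, and the target directory is the /work/BACKUP path from your trace):
sudo innobackupex --user=root --password='xxxx' /work/BACKUP/mysql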
Hi Francisco, good news: the second problem was fixed by changing the permissions on the ‘lost+found’ directory.
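For anyone who hits the same thing, the fix is along these lines (the path and exact ownership here are guesses; adjust to wherever lost+found lives on your mount):
chown mysql:mysql /work/BACKUP/mysql/lost+found
chmod 700 /work/BACKUP/mysql/lost+found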
Now I still need to figure out what is causing this:
Copying ./resourcespace/resource.ibd
to /backup/mysql/data/2021-02-11_15-34-29/resourcespace/resource.ibd
210211 15:37:34 InnoDB: Operating system error number 9 in a file operation.
InnoDB: Error number 9 means 'Bad file descriptor'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html
InnoDB: File operation call: 'close'.
InnoDB: Cannot continue operation.
Any thoughts or ideas would be greatly appreciated.
D
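One quick way to rule out a network mount under the backup destination (assuming the /work/BACKUP path from the earlier trace):
df -T /work/BACKUP       # filesystem type; nfs or cifs would mean a network unit
mount | grep /work/BACKUP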
No, the mountpoint is not a network unit. But I built a new system with a new OS and a newer copy of Percona. I took a mysqldump, imported it into the new system, and it backed up fine. Since everything is so old on the original system, I am just going to migrate the customer to the new system once they test out their applications. Thanks for your support. D
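For reference, the migration was just a logical dump and reload, roughly like this (the options and file name are illustrative; adjust to taste):
# on the old 5.1 server:
mysqldump --all-databases --single-transaction --routines --triggers > full_dump.sql
# on the new server:
mysql < full_dump.sql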