180805 11:30:40 version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'debian-sys-maint' (using password: YES).
180805 11:30:40 version_check Connected to MySQL server
180805 11:30:40 version_check Executing a version check against the server ...
#A software update is available:
180805 11:30:42 version_check Done.
180805 11:30:42 Connecting to MySQL host: localhost, user: xxxxxxxx, password: set, port: not set, socket: not set
Using server version 5.7.23-0ubuntu0.16.04.1
(the log ends here, even if you wait a while)
While this is happening, the database cannot be used by other applications.
If xtrabackup is aborted, it takes a moment before the other applications can access the database again.
I wonder if that’s the issue here? If you could check, and also come back with the version of Percona XtraBackup and the rest of your software/environment information, then I will see if I can get someone to help you with this.
Ooo I am sorry, I am usually careful not to use our in-house short forms! But yes, by PXB I meant Percona XtraBackup.
Did you check whether you have any tables that use a storage engine other than InnoDB or XtraDB? For example, if you have any MyISAM tables, Percona XtraBackup will take a lock on those tables while they are being backed up.
If those tables are still in use by your application, it may not be possible to take that lock, and then you could see the symptom you are experiencing. Let me know, and if this is not the case in this instance, I will ask the technical team whether there are other scenarios that could explain your problem.
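As a rough sketch (adjust the excluded schemas to suit your server), a query like this against information_schema should list any tables that are not InnoDB:
-- List non-InnoDB tables, skipping the system schemas (a sketch; adjust as needed).
SELECT table_schema, table_name, engine
  FROM information_schema.tables
 WHERE engine <> 'InnoDB'
   AND table_schema NOT IN ('mysql', 'information_schema',
                            'performance_schema', 'sys');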
It is most likely due to the attempts to take locks for the non-InnoDB tables towards the end of the backup process. The first thing I should say is please don’t just go and start changing the storage engines for the non-InnoDB tables, as there’s much more to it than that. But…
You could (feasibly) have a situation along these lines:
[LIST]
[*]Your application locks table A (which is not an InnoDB table)
[*]Percona XtraBackup locks table B ready for backup and tries to take a lock on table A
[*]Your application meanwhile tries to lock MyISAM table B; it can’t do that because the table is locked elsewhere, so it gets stuck too
[*]So now they are stuck waiting for each other
[/LIST] On other occasions, if there are no transaction locks on table A, Percona XtraBackup gets the locks on the tables it needs to complete the job, and everything finishes nicely.
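If you want to check whether that is what is happening while a backup is hung, something along these lines from a separate session should show who is waiting on whom (a sketch; on MySQL 5.7 the metadata-lock instrument may first need to be enabled in performance_schema):
-- See which threads are stuck in 'Waiting for table ...' states:
SHOW FULL PROCESSLIST;
-- With the sys schema (bundled with MySQL 5.7), show blocking relationships:
SELECT * FROM sys.schema_table_lock_waits\G
-- Raw metadata-lock view (needs the 'wait/lock/metadata/sql/mdl' instrument enabled):
SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
  FROM performance_schema.metadata_locks;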
I will, though, see if I can get one of the tech team in case I have missed the point, and they might also have information on what to do about it for you. I don’t know what your situation is (size of company, etc.) but it might be that a database audit consultancy could help put your applications and backups on track, so that they are easier to maintain going forward.
Meanwhile, let’s see if we have any other advice… and here is a blog post about storage engines and when/why you should change:
Hello again Raba, I’ve been consulting with some of the Percona XtraBackup team. They mentioned that if Percona XtraBackup has started and is backing up InnoDB tables successfully they’d expect to see something like this before the process ‘hangs’ for locks:
180904 18:02:07 Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: /var/lib/mysql/mysql.sock
Using server version 5.7.22-22
./percona-xtrabackup-2.4.12-Linux-x86_64/bin/xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 9fc7f34)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 50331648
InnoDB: Number of pools: 1
180904 18:02:07 Starting to copy logfile from the LSN 2824727660.
If you don’t see something like that, then it would suggest that Percona XtraBackup did not even start successfully in the cases where it does not complete. In that case, you would have to consider:
[LIST]
[*]Do you get the hangs only when using XtraBackup?
[*]Is it possible that there is a different issue that XtraBackup is merely exposing? For example, do you have logs or other evidence showing that the application can connect OK throughout the day? They are wondering whether there is often an issue with your application that you only notice when you try to run backups (a couple of status counters you could check are sketched after this list).
[*]Given that the backup succeeded once, it would suggest that something specific is happening at the times when it does not succeed. I know that’s obvious, but …
[/LIST] The problem with these kinds of questions, though, is that the answer is likely to be very specific to your installation… and at that point it’s not something we can explore 1:1 in the open source forum; it’s more something our consultancy branch would take on.
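That said, a couple of quick, general checks might show whether the application is quietly having connection trouble outside of backup windows (only rough indicators, as these counters are cumulative since the server last started):
-- Cumulative counters since the last server start; sudden jumps between
-- checks can point to application-side connection problems.
SHOW GLOBAL STATUS LIKE 'Aborted_clients';
SHOW GLOBAL STATUS LIKE 'Aborted_connects';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Uptime';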
Once the ‘Starting to copy logfile …’ line is reached, everything else runs smoothly. If xtrabackup hangs, it happens before that point.
So I think locking MyISAM tables is not the cause.
Interestingly, from time to time, many of the following messages appear in mysqld’s error.log:
2018-09-05T21:39:04.023325+01:00 0 [Warning] InnoDB: A long semaphore wait:
--Thread 140514665477888 has waited at dict0dict.cc line 1238 for 267.00 seconds the semaphore:
Mutex at 0x2dcd008, Mutex DICT_SYS created dict0dict.cc:1172, lock var 1
Then follows:
InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
followed by many more lines of diagnostic output.
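As far as I know, the same diagnostic output that the monitor writes to the error log can also be requested on demand from a client session; the SEMAPHORES section near the top lists long waits like the one above:
-- Print the InnoDB monitor output on demand; the SEMAPHORES section
-- shows long semaphore waits like the one in error.log.
SHOW ENGINE INNODB STATUS\G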
So far, though, that has had no noticeable effect. I also do not know what to do about it.
My provider has changed something in the memory management of the virtual server after I told them about other strange phenomena. Since then, there have been no more problems using Percona XtraBackup.