Help diagnosing ERROR 2013 (HY000): Lost connection to MySQL server during query

I’m running Percona Server 8.0.15 on RHEL 6 (fully patched).

I have a large database and I’m trying to run a query that is fairly complex and expected to take a while. It runs for (exactly?) 300 seconds and then the client exits with:

ERROR 2013 (HY000): Lost connection to MySQL server during query

Nothing shows up in any logs (either system or MySQL logs). After a number of searches, I have tried increasing several parameters including:

net_read_timeout=600
net_write_timeout=180
wait_timeout=86400
interactive_timeout=86400
max_allowed_packet=8192M
max_execution_time=600000

and none of them changed the behavior (i.e. the query still exits at 300s every time). The load on the server does not seem to be seriously impacted (i.e. CPU and memory remain more or less constant).
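For reference, settings like these can be changed at runtime with SET GLOBAL (a sketch using the values above; SET GLOBAL only affects connections opened after the change, so the client has to reconnect, and the same values can go in my.cnf under [mysqld] to survive a restart):

-- apply the values at runtime (requires SUPER or SYSTEM_VARIABLES_ADMIN)
SET GLOBAL net_read_timeout = 600;
SET GLOBAL net_write_timeout = 180;
SET GLOBAL wait_timeout = 86400;
SET GLOBAL interactive_timeout = 86400;
-- the server caps max_allowed_packet at 1G (1073741824 bytes)
SET GLOBAL max_allowed_packet = 1073741824;
-- max_execution_time is in milliseconds and only applies to SELECT statements
SET GLOBAL max_execution_time = 600000;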

I’ve looked through all of the variables and do not see any timeout that is set to 300s/5min, so at this point, I’m just looking for pointers on where to look next.
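For anyone repeating that check, something like this lists every timeout-related server variable with its current value (max_execution_time doesn’t match the pattern, so it is listed separately):

-- list all server variables with 'timeout' in the name
SHOW GLOBAL VARIABLES LIKE '%timeout%';
-- max_execution_time is also a candidate but doesn't contain 'timeout'
SHOW GLOBAL VARIABLES LIKE 'max_execution_time';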

Thanks

Hi, I am experiencing the exact same error with our full and incremental backups.

I have tried increasing timeout values just as you have with no success.

This started happening when one of our tables containing BLOB column data grew from 80 GB to over 100 GB in size.

The ‘Lost connection’ message occurs just after the table data for the large table has finished copying, like this:

191013 07:20:35 >> log scanned up to (132055253021)
191013 07:20:36 >> log scanned up to (132055253021)
191013 07:20:36 [04] …done
191013 07:20:37 >> log scanned up to (132055253021)
Error: failed to execute query SET SESSION lock_wait_timeout=31536000: Lost connection to MySQL server during query

If we exclude the large table from the backup everything works as before.
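For context, excluding a table from an xtrabackup run can be done with its table filtering options, roughly like this (connection options omitted; the schema and table names are placeholders, not our real ones):

xtrabackup --backup --target-dir=/backups/full --tables-exclude='^mydb[.]big_blob_table$'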

What we see when we include the large table is that the SQL connection created by the xtrabackup client is left stuck in the Sleep state and never disconnects, even though the xtrabackup client itself has already exited with the ‘Lost connection to MySQL server during query’ message.
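For anyone who wants to see the same thing, a query along these lines lists the sleeping connections (a plain SHOW PROCESSLIST shows the same information):

SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE
FROM information_schema.PROCESSLIST
WHERE COMMAND = 'Sleep'
ORDER BY TIME DESC;
-- optionally filter on USER for the account xtrabackup connects as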

Did you find a solution to your problem?

As a matter of interest, is your very large table using the InnoDB storage engine?
I ask because for MyISAM (and other non-InnoDB storage engines) Percona XtraBackup takes a table lock during the backup.
Other than that, though, please provide versions of the software and info about the environment - I may move you to a new thread.
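A quick way to check the engine (and size) of the table, if it helps; the schema and table names below are placeholders:

SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 1) AS approx_size_gb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'big_blob_table';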

Hi there

Did you resolve this? A backup that has run without problems for years started failing with this error about a week ago.

We’re running server version 5.7 with XtraBackup 2.4.28, in Docker.

I thought it might be something to do with table size; we had a huge table that was only a temporary backup, so I got rid of it.

It worked once after I got rid of the big table, but it has gone back to failing again.

Any clues gratefully received

Thank you
