Metadata locking issue with Percona 5.5

The short story is that I have a magento installation.

It ran on MySQL 5.1.41 (Ubuntu). It was crashing every now and then
(when trying to insert into or delete from certain tables that were running some DDL statements, like TRUNCATE and ALTER TABLE).

I upgraded to MySQL 5.5.15 (built from source), and the crashes became rarer … but they still happened. So I moved to Percona 5.5.20-rel24.

Now I no longer have crashes (none in the last month or so), but I am running into another issue at each reindex.

+--------+------------+-----------------+------------+---------+------+---------------------------------+--------------------------------------+-----------+---------------+-----------+
| Id     | User       | Host            | db         | Command | Time | State                           | Info                                 | Rows_sent | Rows_examined | Rows_read |
+--------+------------+-----------------+------------+---------+------+---------------------------------+--------------------------------------+-----------+---------------+-----------+
| 223351 | mydatabase | www.local:40590 | mydatabase | Query   |  372 | Waiting for table metadata lock | TRUNCATE TABLE                       |         0 |             0 |         1 |
| 224815 | mydatabase | www.local:52419 | mydatabase | Query   |   92 | Waiting for table metadata lock | TRUNCATE TABLE                       |         0 |             0 |         1 |
| 225117 | mydatabase | www.local:52467 | mydatabase | Query   |   37 | Waiting for table metadata lock | TRUNCATE TABLE catalogsearch_result  |           |               |           |
+--------+------------+-----------------+------------+---------+------+---------------------------------+--------------------------------------+-----------+---------------+-----------+

The problem I have is that no other running query is related to catalogsearch_result, and no lock shows up in the InnoDB status.
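For reference, this is how the InnoDB-level lock views can be checked on 5.5 (the INFORMATION_SCHEMA plugin tables). Note that these only show row and table locks inside the InnoDB engine, not server-level metadata locks, which is why they can be empty even while sessions are stuck — a sketch:

```sql
-- InnoDB row/table locks, and who is waiting on whom (MySQL/Percona 5.5).
-- These tables do NOT include server-level metadata locks, so a session
-- stuck in "Waiting for table metadata lock" may show nothing here.
SELECT * FROM information_schema.INNODB_LOCKS;
SELECT * FROM information_schema.INNODB_LOCK_WAITS;
```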

I am somewhat confused.

As I read here : ck-and-peclmysqlnd_ms/

I know it is possible to provoke a metadata lock using a valid SQL statement on a nonexistent table. But that is not the case here: even though that is rather unexpected, rogue behaviour, I have no DROPs and no operations on nonexistent tables…

The bug seems to be a lot like :
and this

Here is also an InnoDB status:

The first question would be: how do I see which query issued a metadata lock? The second, of course: how do I avoid metadata lock issues?

There are a couple of ways to get more information on locking. One is “mysqladmin debug”. Unfortunately there isn’t as much instrumentation as I’d like in this area (yet). I think that the changes to locking in 5.5 solved a lot of problems that are pretty serious in 5.1, but as you’re seeing, there is still a possibility of such problems.

You are pre-filtering the data you’re showing to some extent; the question is not only whether any other running query is related to the table you’re trying to truncate, but also whether any open connection (open transaction) has a lock. It may also be waiting on a global lock (I don’t recall off the top of my head) instead of a table lock that’s specific to this table. Please check the manual and see if you see more information on that; if not, source code investigation may be needed.
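The point about open transactions can be checked directly: a transaction that has merely read from the table holds a shared metadata lock on it until COMMIT or ROLLBACK, and that is enough to block TRUNCATE. A sketch for 5.5:

```sql
-- List open transactions, oldest first; trx_mysql_thread_id is the
-- connection Id from SHOW PROCESSLIST. An old, idle transaction that
-- once touched the table can block TRUNCATE via its metadata lock,
-- even though its trx_query is now NULL.
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX
ORDER BY trx_started;

-- Once the offender is identified, it can be terminated with
-- KILL <trx_mysql_thread_id>;
```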

I would suggest that you may consider not using TRUNCATE so heavily. It’s not what you “ought” to have to do to make this work well, but unfortunately we know that schema/metadata handling in the server is not as sophisticated as the storage-engine locking that InnoDB does, and you are more likely to run into such problems.
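One way to act on this advice is to replace TRUNCATE (which is DDL, takes an exclusive metadata lock, and implicitly commits) with a plain DELETE, which is ordinary DML — a sketch, assuming the table can tolerate a slower row-by-row delete:

```sql
-- DML alternative to TRUNCATE TABLE catalogsearch_result:
-- DELETE takes row locks inside InnoDB plus only a shared metadata
-- lock, so it queues alongside other DML rather than behind it.
-- Unlike TRUNCATE, it does not reset the AUTO_INCREMENT counter.
DELETE FROM catalogsearch_result;
```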

Hi Baron and thank you for your reply.

Here is a more detailed “explanation” of what is happening :

I have a suspicion that this behaviour is caused by some exclusive global lock, as even SELECTs no longer work. My suspicion is strongly tied to the way MySQL handles inter-engine metadata locks: I cannot even use phpMyAdmin on OTHER databases during the “blackouts”.

I currently have no idea how to “catch” the perpetrator query that effectively locks me out of the database. Until now, EXPLAIN, SHOW ENGINE INNODB STATUS and SHOW FULL PROCESSLIST have helped, but now it seems I have run into a deeper problem that I do not know how to debug (especially in a production environment).
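One low-tech way to catch the blocker is to snapshot the process list every few seconds (e.g. from cron) and look at what the non-waiting connections were doing just before a pile-up — a sketch:

```sql
-- Snapshot everything that is connected, including idle sessions:
-- a connection with an open transaction shows Command = 'Sleep' but
-- still holds its metadata locks, so idle sessions are the usual
-- suspects. Log this output periodically and correlate with blackouts.
SELECT id, user, host, db, command, time, state, info
FROM information_schema.PROCESSLIST
ORDER BY time DESC;
```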

Also, if it helps: on older MySQL versions (5.1.40, 5.5.15) I had random crashes (every two weeks or so). This is the reason I upgraded all the way to the latest Percona. It no longer crashes every other week, but being locked out once or twice a day for 8–15 minutes is annoying.

If it helps improve the software or confirm a bug, I am open to sending any info you need. I am working on determining a scenario where this happens so I can reproduce the bug; right now the whole thing seems random :( So I am open to any suggestions (where to look, what tools to use to get more data, anything at all).