About 1 in 20 UPDATE statements on my busy database box takes 60-70 seconds to run, regardless of the table. Turning the profiler on shows that the query spends almost all of that time waiting for a table lock. CPU usage doesn't spike while this happens, either.
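For reference, this is roughly how I drove the profiler (a sketch; `SHOW PROFILE` needs session profiling enabled, and the table/column names here are placeholders, not my real schema):

```sql
-- Enable per-query profiling for this session
SET profiling = 1;

-- Run one of the offending updates (placeholder names)
UPDATE my_table SET some_col = 1 WHERE id = 42;

-- List recent queries with their total times
SHOW PROFILES;

-- Break the query down by stage; the slow updates show nearly all
-- of their time in the "Table lock" stage
SHOW PROFILE FOR QUERY 1;
```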
Looking at the slow query log, I don’t see any selects taking longer than 10 seconds.
I can’t think of any other reason for this update to wait so long on a lock. Is there a recommended way to debug this kind of perf hit? Am I missing something?
Some more info:
All my tables use the MyISAM storage engine. The one I profiled has only about 140 rows, and the update filters on the primary key in the WHERE clause (`UPDATE table SET ... WHERE id = const`), so it should be near-instantaneous.
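To rule out a bad plan, the index use can be double-checked with EXPLAIN on the equivalent SELECT (EXPLAIN doesn't cover UPDATE statements on older MySQL versions; placeholder names again):

```sql
-- "key" should show PRIMARY and "rows" should be 1 for a PK lookup
EXPLAIN SELECT * FROM my_table WHERE id = 42;
```

It does use the primary key, which is why I don't think the plan is the problem.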
Here's a dump of `SHOW GLOBAL STATUS LIKE '%table%'`:
- Open_tables: 175
- Opened_tables: 7458
- Table_locks_immediate: 8172412
- Table_locks_waited: 7880
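For what it's worth, a quick back-of-the-envelope check on those counters puts lock contention at roughly 0.1% of all lock requests, which doesn't seem alarming on its own:

```python
# Counters from SHOW GLOBAL STATUS LIKE '%table%' above
table_locks_immediate = 8172412  # locks granted without waiting
table_locks_waited = 7880        # locks that had to wait

# Fraction of lock requests that blocked
total = table_locks_immediate + table_locks_waited
wait_ratio = table_locks_waited / total
print(f"{wait_ratio:.4%}")  # ~0.096%
```

So on average contention is rare, but the waits that do happen are apparently very long.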
My key_buffer_size is 15 GB (roughly 25% of the 64 GB available), and my table_cache is set to 656 (80 connections * 4 max joins * 2 file descriptors). The query cache is on with 200 MB available and a hit ratio of around 4.
The database runs on an 8-core machine with 64 GB of RAM. There are about 80 persistent connections, each running mostly SELECT statements.