Table locks take too long for MyISAM table

Hi all,

About 1 in 20 UPDATE statements on my busy db box takes 60-70 seconds to run (on any table). Turning the profiler on shows that it spends nearly all of that time waiting to acquire the table lock. The CPU doesn’t spike at that point either.
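
(In case it matters, this is roughly how I’m capturing the profile; some_table is just a placeholder, and the query id is whatever SHOW PROFILES assigns to the statement:)

  SET profiling = 1;
  UPDATE some_table SET ... WHERE id = const;
  SHOW PROFILES;              -- lists recent statements with their query ids
  SHOW PROFILE FOR QUERY 1;   -- per-stage timings, including the lock wait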

Looking at the slow query log, I don’t see any selects taking longer than 10 seconds.

I can’t think of any other reason for this update to wait so long on a lock. Is there a recommended way to debug this kind of perf hit? Am I missing something?

Some more info:

All my tables use the MyISAM storage engine. The one that I profiled has only about 140 rows, and the update uses the primary key in the WHERE clause (UPDATE table SET ... WHERE id = const), so it should be near-instantaneous.
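
(To rule out a bad plan I also looked at the equivalent SELECT with EXPLAIN, since EXPLAIN on this version only accepts SELECT statements; it should come back as a const lookup on the primary key:)

  EXPLAIN SELECT * FROM table_in_question WHERE id = const;
  -- expected: type = const, key = PRIMARY, rows = 1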

Here’s a dump of show global status like '%table%':

  • Open_tables: 175
  • Opened_tables: 7458
  • Table_locks_immediate: 8172412
  • Table_locks_waited: 7880

My key_buffer_size is at 15 gigs (25% of the available 64 gigs), and my table_cache is set to 656 (80 connections * 4 joins max * 2 file descriptors). I have query cache turned on with 200 megs available (hit ratio of around 4).
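
(Roughly what that corresponds to in my.cnf, in case the exact variable names help; these are the 5.0-era names:)

  # sketch of the settings described above, not a copy of the actual config
  key_buffer_size  = 15G
  table_cache      = 656
  query_cache_type = 1
  query_cache_size = 200M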

The machine I’m running the db on is an 8-core machine with 64 gigs of ram. I have about 80 persistent connections each running mostly select statements.

The 140-row table that you are trying to lock, is it involved in a lot of JOINs with bigger tables?

How many queries per second do you have?

What is your general CPU load?

My guess is that this table is involved in a lot of queries.
With table-level locking there are two kinds of locks, read locks and write locks. Several processes can read at the same time, but only one at a time can write, so the server keeps two queues of lock requests: one for reads and one for writes.
Because concurrent reads are allowed, a new read request is usually granted immediately whenever a read lock is already held on the table.
This means that on a table that is being read all the time, the write locks can be starved: the read locks keep overlapping one another and the UPDATE just sits in the queue. That would cause exactly the effect you are experiencing.
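
If you want to confirm that this is what is happening, run these from another session while an UPDATE is hanging (both are plain SHOW statements, nothing version-specific):

  SHOW PROCESSLIST;                        -- the stuck UPDATE shows up in state "Locked"
  SHOW STATUS LIKE 'Table_locks_waited';   -- this counter keeps climbing while writes queue up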

This is one of the drawbacks of the table-level locking that MyISAM uses. Changing to InnoDB, which uses row-level locking, will most likely speed up the UPDATE statements for you.

CPU utilization hovers around 0.5% (across all 8 cores). I have about 80 connections, each funneling in about 17 SELECT queries a second (~80% of all data manipulation statements are SELECTs).

It looks like InnoDB is the way to go. I guess I should have known this already, but I’ve never seen an update get starved for so long…

Thanks for the help.

Out of curiosity:
How large is your DB?

You mentioned that the longest query took about 10 seconds, but do you have a feel for how long the average query takes?

Because even though you have a very hefty server, your CPU load looks pretty low for queries that are supposedly taking a long time.

Doing an ls -lh of the database directory says that it’s about 259M. Running show table status on the table in question says that it’s about 90K.

After switching all the tables to InnoDB by running ALTER TABLE {table} ENGINE = InnoDB, I see a jump in CPU usage; it’s now around 15% averaged across all cores. SELECT queries on average (in my limited testing) take less than a second (some of them join on the table in question).
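
(For what it’s worth, a quick way to double-check that every table actually converted, assuming 5.0 or later where information_schema is available; 'your_db' is a placeholder for the real schema name:)

  SELECT table_name, engine
    FROM information_schema.tables
   WHERE table_schema = 'your_db';   -- 'your_db' is a placeholder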

HOWEVER, I’m still seeing a “Table lock” stage of >70 seconds in the profiler. Here’s a full dump:

  • (initialization): 0.000164
  • checking permissions: 0.000015
  • Opening tables: 0.000017
  • System lock: 0.000013
  • Table lock: 78.990882
  • init: 0.000166
  • Updating: 0.000383
  • end: 0.001585
  • query end: 0.000013
  • freeing items: 0.00002
  • closing tables: 0.000012
  • logging slow query: 0.000007

The query is of the form:
UPDATE table_in_question SET … WHERE id = CONSTANT

Here’s the innodb lock section from mysqlreport (via hackmysql.com):
__ InnoDB Lock _________________________________________________________
Waits              33     0.0/s
Current             0
Time acquiring
  Total           171 ms
  Average           5 ms
  Max              11 ms
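
(If the raw counters behind that summary are useful, they can be pulled with the standard statements:)

  SHOW STATUS LIKE 'Innodb_row_lock%';   -- row-lock waits and wait times since startup
  SHOW ENGINE INNODB STATUS\G            -- the TRANSACTIONS section lists any current lock waits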

And here are my innodb_* variables:

  • innodb_additional_mem_pool_size: 524288000
  • innodb_autoextend_increment: 8
  • innodb_buffer_pool_awe_mem_mb: 0
  • innodb_buffer_pool_size: 34359738368
  • innodb_checksums: ON
  • innodb_commit_concurrency: 0
  • innodb_concurrency_tickets: 500
  • innodb_data_file_path: ibdata1:2G:autoextend
  • innodb_data_home_dir:
  • innodb_doublewrite: ON
  • innodb_fast_shutdown: 1
  • innodb_file_io_threads: 4
  • innodb_file_per_table: ON
  • innodb_flush_log_at_trx_commit: 1
  • innodb_flush_method:
  • innodb_force_recovery: 0
  • innodb_lock_wait_timeout: 50
  • innodb_locks_unsafe_for_binlog: OFF
  • innodb_log_arch_dir:
  • innodb_log_archive: OFF
  • innodb_log_buffer_size: 8388608
  • innodb_log_file_size: 1073741824
  • innodb_log_files_in_group: 2
  • innodb_log_group_home_dir: ./
  • innodb_max_dirty_pages_pct: 90
  • innodb_max_purge_lag: 0
  • innodb_mirrored_log_groups: 1
  • innodb_open_files: 300
  • innodb_rollback_on_timeout: OFF
  • innodb_support_xa: ON
  • innodb_sync_spin_loops: 20
  • innodb_table_locks: ON
  • innodb_thread_concurrency: 0
  • innodb_thread_sleep_delay: 10000

Any idea why this update is STILL getting starved?

Very strange. Could there, by any chance, be some part of your application that uses a “LOCK TABLES xxx” statement?

Or do you have a backup such as mysqldump running very frequently as a cron job in the background?

Because you seem to have a pretty small DB on a very hefty server.
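
If it is something like that, catching the blocker in the act is usually the quickest way to confirm it. From another session, while the UPDATE is stuck:

  SHOW FULL PROCESSLIST;                  -- look for a long-running dump or LOCK TABLES session
  SHOW OPEN TABLES WHERE In_use > 0;      -- lists tables that currently have locks on them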

BTW depending on your disk/RAID hardware you could consider changing to:
innodb_flush_log_at_trx_commit=2

Because otherwise you can run into the limit that a single disk can only handle roughly 150 commits per second: with innodb_flush_log_at_trx_commit=1 every commit waits for a physical flush of the log, and disk seek/rotation time caps how many of those flushes the drive can do per second.

That was it!

I was running a mysqldump every hour at the same time I was running the update script. Adding --single-transaction to the mysqldump script solves the issue. I’ve also staggered the scripts so that they don’t run at the same time.
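
(For anyone who hits the same thing, the dump job now looks roughly like this; the db name and path are placeholders. Note that --single-transaction only avoids the table locks because the tables are InnoDB now; it would not have helped with MyISAM.)

  # sketch of the hourly cron job, not the exact command
  mysqldump --single-transaction your_db > /path/to/backup.sql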

Thanks for your help!

About innodb_flush_log_at_trx_commit: the MySQL docs are pretty unclear about how it affects commit performance. What RAID setup should I have in order to use innodb_flush_log_at_trx_commit=2?