I have an UPDATE that is taking a huge amount of time, and I cannot see why.
From the slow query log I see:
User@Host: root[root] @ xx-xx-xx-xxx
Thread_id: 41664 Schema: palio_demo Last_errno: 1205 Killed: 0
Query_time: 51.302989 Lock_time: 0.000082 Rows_sent: 0 Rows_examined: 0 Rows_affected: 0 Rows_read: 1
Bytes_sent: 67 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
InnoDB_trx_id: DA7EF817
SET timestamp=1346437934;
UPDATE ad_network_ad_groups SET ad_network_task = NULL WHERE id = 544632;
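Note that Last_errno: 1205 is ER_LOCK_WAIT_TIMEOUT, and the ~51 second query time is right around the default innodb_lock_wait_timeout of 50 seconds, so the statement seems to spend its time waiting on a row lock rather than doing work. To try to catch whatever is holding the lock, I plan to run something along these lines while the update is stuck (assuming this server version exposes the InnoDB transaction tables in information_schema, as 5.5 and later do):

SELECT
  r.trx_id              AS waiting_trx,
  r.trx_mysql_thread_id AS waiting_thread,
  r.trx_query           AS waiting_query,
  b.trx_id              AS blocking_trx,
  b.trx_mysql_thread_id AS blocking_thread,
  b.trx_query           AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;

The TRANSACTIONS section of SHOW ENGINE INNODB STATUS should show the same lock wait from the server's point of view.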
The table itself is not overly large, about 480K rows (not that size should matter for an update by primary key).
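As a sanity check on that, the equivalent SELECT should come back as a const single-row lookup on the primary key (EXPLAIN on UPDATE statements only exists from MySQL 5.6 on, so the SELECT form is the stand-in here):

EXPLAIN SELECT * FROM ad_network_ad_groups WHERE id = 544632;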
The table looks like:
mysql> show create table ad_network_ad_groups \G
*************************** 1. row ***************************
       Table: ad_network_ad_groups
Create Table: CREATE TABLE `ad_network_ad_groups` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  …
  … (about 20 more columns)
  …
  PRIMARY KEY (`id`),
  (a couple of other indexes here, but the updated column is not indexed)
) ENGINE=InnoDB AUTO_INCREMENT=565287 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
There is an update trigger on the table (I can post the code if need be). It checks whether any of a few specific columns changed and, if so, inserts a journal record; the column being updated here is not one of the three the trigger traps.
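To illustrate its shape only, it looks roughly like the sketch below; the trapped column names and the journal table name are hypothetical stand-ins, not the real ones:

DELIMITER $$
CREATE TRIGGER ad_network_ad_groups_au
AFTER UPDATE ON ad_network_ad_groups
FOR EACH ROW
BEGIN
  -- hypothetical columns; the real trigger traps three specific columns,
  -- and ad_network_task is not one of them
  IF NOT (NEW.status <=> OLD.status)
     OR NOT (NEW.bid <=> OLD.bid)
     OR NOT (NEW.budget <=> OLD.budget) THEN
    -- ad_group_journal is a hypothetical journal table
    INSERT INTO ad_group_journal (ad_group_id, changed_at)
    VALUES (NEW.id, NOW());
  END IF;
END$$
DELIMITER ;

So for this particular UPDATE the IF never fires, and the trigger should add essentially no work.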
This is running on AWS. I've taken a snapshot of the disks and built a test environment to investigate why this might be happening, but I can't reproduce it there: the same update runs in 0.01 seconds (which is what I would expect in prod as well).
Any idea what I could check next?