What can cause a MEMORY table (which I understood to be very fast) to regularly update slowly with queries similar to:
UPDATE aliro_session SET time = '1268959445', marker = marker+1 WHERE session_id = '9215b8263dd679e2ffd78a39a8253e18' AND isadmin = 0 AND time > 1268958545
The field session_id is the primary key, so finding the record should be simple. And none of the fields in the table is too complicated:
CREATE TABLE IF NOT EXISTS aliro_session (
  time int(11) NOT NULL default '0',
  session_id char(32) NOT NULL,
  isadmin tinyint(3) unsigned NOT NULL default '0',
  guest tinyint(4) NOT NULL default '1',
  userid int(11) NOT NULL default '0',
  gid tinyint(3) unsigned NOT NULL default '0',
  marker smallint(6) NOT NULL default '0',
  ipaddress varchar(15) NOT NULL,
  KEY session_id USING HASH (session_id)
) ENGINE=MEMORY DEFAULT CHARSET=utf8;
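To make the "finding the record should be simple" point concrete: a MEMORY table with a HASH index behaves much like an in-memory hash map, so the equality lookup on session_id should be constant-time regardless of table size. The following Python sketch is a loose analogy only; the sample data and the touch_session helper are hypothetical, not MySQL internals:

```python
# Loose analogy: a MEMORY table with USING HASH is roughly a hash map
# keyed on session_id, so an equality lookup does not scan the table.
# Hypothetical sample data, shaped like one row of aliro_session.
sessions = {
    "9215b8263dd679e2ffd78a39a8253e18": {"time": 1268959000, "marker": 0, "isadmin": 0},
}

def touch_session(table, session_id, now, cutoff):
    """Mimic the UPDATE: bump time and marker if the session exists,
    is not an admin session, and has not timed out."""
    row = table.get(session_id)  # constant-time lookup, like the HASH index
    if row is not None and row["isadmin"] == 0 and row["time"] > cutoff:
        row["time"] = now
        row["marker"] += 1
        return 1  # rows matched and changed
    return 0
```

Which is exactly why the question is puzzling: the per-row work here is trivial, so the slowness is unlikely to come from the lookup itself.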
Is there some reason why this would regularly crop up as a slow-ish query?
In the context of your other message, I would start to suspect that your server is globally suffering from something, and it’s not the query itself that’s slow, but collateral damage caused by something else happening.
That could be - although it rather undermines the advice to put optimization effort into slow queries. How do you pick out the queries that are slow in their own right from those that are slow because of pressure on the server (which is also a web server running CMSs that involve significant PHP code execution)?
During a period when the server was not running very well, the proportion of queries taking more than 0.5 seconds was still less than 1%. So maybe there isn’t really a problem of slow queries at all.
But what remains puzzling is that there is still a definite and fairly consistent pattern to which queries are relatively slow. Some of these make sense, because they are relatively complex queries that do a lot of work (most have been worked on and don’t seem to have much scope for improvement). Others are hard to understand - they are queries that normally execute very quickly indeed, yet somehow turn up as slow queries on a regular basis.
Is there anything to be found in all this, or is it a pointless pursuit?
MySQL doesn’t have good metrics to help you work on problems like this. You have to do a bunch of manual work to correlate queries in the log, system activity, and so on. You need to be gathering a lot of information about everything you can. The Percona enhancements to the slow query log can help a LOT here.
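As a first step in that manual correlation, you can aggregate the slow query log by a normalized query "shape," so that thousands of UPDATEs differing only in their literals collapse into one bucket and the recurring patterns become visible. This is roughly what Percona's pt-query-digest calls a fingerprint; the sketch below is a crude illustrative imitation, not the real algorithm:

```python
import re
from collections import Counter

def fingerprint(sql):
    """Crudely normalize a query: lowercase, replace quoted strings and
    bare numbers with '?', and collapse whitespace. A rough imitation of
    a pt-query-digest fingerprint, for illustration only."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals
    s = re.sub(r"\s+", " ", s)       # collapse whitespace
    return s

def digest(queries):
    """Count how often each normalized query shape appears."""
    return Counter(fingerprint(q) for q in queries)
```

Fed the session UPDATE from the question, every variant (different timestamps, different session IDs) lands in the same bucket, so you can see at a glance how often each shape shows up slow and at what times.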
The pattern is fairly easy to theorize about, but hard to prove. For example, it makes sense that the queries that suffer might be accessing data that is locked by another query that’s running slowly or has a high arrival rate. One of the techniques I sometimes use for cases like this is to get a GDB stack trace of the whole server when it’s doing badly and aggregate that.
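The aggregation half of that GDB technique can be scripted. Capturing the traces would be something like `gdb -p $(pidof mysqld) -batch -ex 'thread apply all bt'`; the Python below then collapses identical thread stacks so that, say, forty threads all parked in the same wait function show up as one line with a count. This mimics what Percona's pt-pmp tool does, and the parsing is illustrative rather than robust:

```python
from collections import Counter

def aggregate_stacks(gdb_output):
    """Group identical thread stacks from 'thread apply all bt' output.
    Each stack is reduced to a comma-joined list of function names, so
    many threads stuck in the same place collapse into one counter key."""
    stacks, current = [], []
    for line in gdb_output.splitlines():
        line = line.strip()
        if line.startswith("Thread "):       # start of a new thread's stack
            if current:
                stacks.append(",".join(current))
            current = []
        elif line.startswith("#"):
            # Frame lines look like: '#0  0xdeadbeef in func_name (...) at ...'
            parts = line.split(" in ")
            func = parts[1].split()[0] if len(parts) > 1 else line.split()[-1]
            current.append(func)
    if current:
        stacks.append(",".join(current))
    return Counter(stacks)
```

The most frequent stack is usually where the server is actually spending its time when it is "doing badly," which is often far more telling than any single slow-log entry.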
You might have a hard time figuring it out from the slow query log – re-running a query later doesn’t always reproduce the conditions the log recorded. The log was then; this is now.