Hi!
I have Percona Server 5.5 on CentOS 6.
For some reason, after migrating from Debian 6 I noticed very high %system CPU usage. My database uses MyISAM tables. The datadir is on an SSD and the filesystem is ext4; disk I/O wait is negligible.
mysql> explain SELECT i.inumber as itemid, i.ititle as title, i.ibody as body, m.mname as author, m.mrealname as authorname, UNIX_TIMESTAMP(i.itime) as timestamp, i.itime, i.imore as more, m.mnumber as authorid, c.cname as category, i.icat as catid, i.iclosed as closed FROM nucleus_item as i USE INDEX(TDB-T), nucleus_member as m, nucleus_category as c WHERE i.iauthor=m.mnumber and i.icat=c.catid and i.iblog=1 and i.itime<="2011-09-03 22:13:17" and i.idraft=0 ORDER BY i.itime DESC LIMIT 8941,9;
+----+-------------+-------+--------+---------------+---------+---------+--------------+-------+-------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref          | rows  | Extra       |
+----+-------------+-------+--------+---------------+---------+---------+--------------+-------+-------------+
|  1 | SIMPLE      | i     | range  | TDB-T         | TDB-T   | 13      | NULL         | 67306 | Using where |
|  1 | SIMPLE      | c     | eq_ref | PRIMARY       | PRIMARY | 4       | db.i.icat    |     1 |             |
|  1 | SIMPLE      | m     | eq_ref | PRIMARY       | PRIMARY | 4       | db.i.iauthor |     1 |             |
+----+-------------+-------+--------+---------------+---------+---------+--------------+-------+-------------+
3 rows in set (0.00 sec)
key_buffer = 16M
That value seems very low for a MyISAM server.
query_cache_size = 160M
That value seems to be a bit high. If you have many queries/sec, try disabling the query cache altogether.
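For example, to disable it entirely (a sketch; both variables are settable at runtime in 5.5, but add the same values to my.cnf to make the change survive a restart):

```sql
-- disable the query cache at runtime (MySQL/Percona 5.5)
SET GLOBAL query_cache_type = 0;
SET GLOBAL query_cache_size = 0;
```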
Your query has a high LIMIT offset, but the response time is still very low, so I think neither of these is the true cause of your problem. Do you also have oprofile output?
It seems like I can't reproduce the performance problems now, so I can't figure out what caused them.
Anyway, thanks. I'll play with query_cache_size a bit more.
I find that server load and metrics from tools like top and sysstat can be really deceptive; they're not what I really care about (though they're still important).
For me, the most important thing is ‘response time’. I need to know that my important queries return within X milliseconds.
melco wrote on Sat, 03 September 2011 13:00
Keep in mind the following.
Suppose you have one query that runs 10,000 times a minute and takes a millisecond to execute, and another query that runs once a minute and takes 5 seconds - the heavy query.
If you optimise that second query by adding an index to the same table, it's possible the first query becomes twice as slow, hurting your overall performance.
So keep that in mind: looking at a single query doesn't really reflect what's going on or what will provide a genuinely positive win for you.
What I think will provide a big win, if you have a MySQL (or Percona) server with microsecond slow query logging, is the following:
(Keep in mind there will be a small MySQL overhead and I/O overhead for the logs with this.)
enable the mysql slow query log
set the long_query_time to 0
let this run for a period of time that reflects the workload.
Run mk-query-digest over the slow query log to highlight the most expensive queries.
mk-query-digest will call out those queries that are run over and over - as those queries are the ones that should be the focus of optimisation as they will provide the biggest win.
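The steps above can be sketched as follows (the log path and output file name are illustrative):

```sql
-- enable the slow query log and capture every query (MySQL/Percona 5.5)
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0;  -- log all queries, regardless of runtime

-- let this run long enough to reflect a typical workload, then on the shell:
--   $ mk-query-digest /var/lib/mysql/slow.log > digest.txt
-- and don't forget to restore long_query_time afterwards.
```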
Shoot me a tweet (@mysqldbahelp) if you need further details
You say that you have migrated from Debian 6 to CentOS 6. Did you have the same hardware and MySQL configuration on Debian 6 as you have now?
And as gmouse said, considering you have mostly MyISAM tables you should consider increasing the key cache size. You can query the information_schema table to find out the index lengths for the heavily used tables and increase the key cache size to at least that much.
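As a rough starting point, you can sum the index sizes of your MyISAM tables from information_schema and size the key cache to at least that (a sketch; filter by TABLE_SCHEMA if you only care about specific databases):

```sql
-- total index size of all MyISAM tables, in MB
SELECT SUM(INDEX_LENGTH) / 1024 / 1024 AS myisam_index_mb
FROM information_schema.TABLES
WHERE ENGINE = 'MyISAM';
```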
Finally, considering your query:
mysql> explain SELECT i.inumber as itemid, i.ititle as title, i.ibody as body, m.mname as author, m.mrealname as authorname, UNIX_TIMESTAMP(i.itime) as timestamp, i.itime, i.imore as more, m.mnumber as authorid, c.cname as category, i.icat as catid, i.iclosed as closed FROM nucleus_item as i USE INDEX(TDB-T), nucleus_member as m, nucleus_category as c WHERE i.iauthor=m.mnumber and i.icat=c.catid and i.iblog=1 and i.itime<="2011-09-03 22:13:17" and i.idraft=0 ORDER BY i.itime DESC LIMIT 8941,9;
That is not a particularly good query: MySQL still reads all 8941 rows before the ones you want, skips them, and sends you only the 9 you requested. You can improve it by remembering where the previous page ended and seeking directly to that row instead of using a large LIMIT offset.
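As a sketch of that idea (the :last_itime placeholder is hypothetical - it stands for the itime of the last row shown on the previous page; the member/category joins are omitted for brevity):

```sql
-- "seek" pagination sketch: filter past the last row already shown,
-- instead of asking MySQL to read and discard 8941 rows with LIMIT
SELECT i.inumber AS itemid, i.ititle AS title, i.itime
FROM nucleus_item AS i
WHERE i.iblog = 1
  AND i.idraft = 0
  AND i.itime < :last_itime   -- itime of the last row on the previous page
ORDER BY i.itime DESC
LIMIT 9;
```

Note this assumes itime is effectively unique per page boundary; if it isn't, you'd seek on (itime, inumber) instead.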