Hi,
We recently upgraded our three-server cluster from Percona MySQL 5.5 to 5.6 and have run into problems with read performance on a number of MyISAM tables.
We started on CentOS 6.6 (Percona-Server-server-55-5.5.42-rel37.1.el6.x86_64) and upgraded to Percona MySQL 5.6 on Gentoo (percona-server-5.6.25.73.0-r1).
The upgrade was done in place; however, the configuration was partly rebuilt as part of the upgrade.
An example of a table we’re having problems with looks like this:
mysql> describe content.placement;
+----------------+---------------------+------+-----+---------------------+-------+
| Field          | Type                | Null | Key | Default             | Extra |
+----------------+---------------------+------+-----+---------------------+-------+
| storyid        | int(11)             | NO   | PRI | 0                   |       |
| revision       | int(11)             | NO   | PRI | 0                   |       |
| placementid    | int(11)             | NO   | PRI | 0                   |       |
| platform       | int(11)             | NO   | MUL | 0                   |       |
| cat_id         | int(11)             | NO   | MUL | 0                   |       |
| layout         | int(11)             | NO   |     | 0                   |       |
| priority       | int(11)             | NO   | MUL | 0                   |       |
| published_date | datetime            | NO   | MUL | 0000-00-00 00:00:00 |       |
| expire_date    | datetime            | NO   | MUL | 0000-00-00 00:00:00 |       |
| topstory       | tinyint(3) unsigned | NO   | MUL | 0                   |       |
| status         | tinyint(4)          | NO   | MUL | 0                   |       |
| revisionstatus | tinyint(4)          | NO   | MUL | 0                   |       |
| sort           | tinyint(4)          | NO   | MUL | 0                   |       |
| topsort        | tinyint(4)          | NO   |     | 0                   |       |
| channeltop     | tinyint(4)          | NO   |     | 0                   |       |
| infinity       | tinyint(4)          | NO   |     | 0                   |       |
+----------------+---------------------+------+-----+---------------------+-------+
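For completeness, the full index definitions behind the MUL entries above can be listed with the usual command, and we can post that output if it helps:
mysql> SHOW INDEX FROM content.placement;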
The query that is giving us grief is a relatively simple one:
mysql> explain SELECT SQL_NO_CACHE storyid FROM placement WHERE cat_id IN (88,248,51,250,807,1534,1855,1860,2247,2457,4451) ORDER by published_date DESC LIMIT 50;
+----+-------------+-----------+-------+---------------+----------------+---------+------+------+-------------+
| id | select_type | table     | type  | possible_keys | key            | key_len | ref  | rows | Extra       |
+----+-------------+-----------+-------+---------------+----------------+---------+------+------+-------------+
|  1 | SIMPLE      | placement | index | cat_id,catpub | published_date | 8       | NULL | 2246 | Using where |
+----+-------------+-----------+-------+---------------+----------------+---------+------+------+-------------+
1 row in set (0.00 sec)
Our problem is that the runtime of this query seems to be randomly distributed between 0.05 and 3+ seconds. In 5.5 the same query was consistently fast, i.e. less than 0.1 s. Another confusing issue is that the chosen key varies from server to server, and even from one execution of the query to the next.
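For context, the kind of variant we can use to pin the planner down looks like this (just a sketch; both index names come from possible_keys above, and we are assuming catpub is a composite index over (cat_id, published_date), which is what the name suggests):

mysql> EXPLAIN SELECT SQL_NO_CACHE storyid FROM placement FORCE INDEX (catpub) WHERE cat_id IN (88,248,51,250,807,1534,1855,1860,2247,2457,4451) ORDER BY published_date DESC LIMIT 50;

The same statement with FORCE INDEX (cat_id) gives us the single-column index for comparison.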
We believe we have eliminated the underlying system as a source of problems. What we do see, however, is that the query seems to get stuck in various phases, and we don’t quite understand why.
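For reference, the profiles below were captured with MySQL’s standard session profiling, roughly like this (the query numbers come from SHOW PROFILES in our session):

mysql> SET profiling = 1;
mysql> SELECT SQL_NO_CACHE storyid FROM placement WHERE cat_id IN (88,248,51,250,807,1534,1855,1860,2247,2457,4451) ORDER BY published_date DESC LIMIT 50;
mysql> SHOW PROFILES;
mysql> SHOW PROFILE FOR QUERY 35;

A slow run looks like this: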
mysql> show profile for query 35;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000054 |
| checking permissions | 0.000004 |
| Opening tables       | 0.000023 |
| init                 | 0.000021 |
| System lock          | 0.000006 |
| optimizing           | 0.000004 |
| statistics           | 0.191296 |
| preparing            | 0.000052 |
| Sorting result       | 0.000016 |
| executing            | 0.000006 |
| Sending data         | 1.426275 |
| end                  | 0.000000 |
| query end            | 0.000013 |
| closing tables       | 0.000004 |
| freeing items        | 0.000031 |
| cleaning up          | 0.000026 |
+----------------------+----------+
16 rows in set, 1 warning (0.00 sec)
Or:
mysql> show profile for query 16;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000033 |
| checking permissions | 0.000004 |
| Opening tables       | 0.000040 |
| init                 | 0.000066 |
| System lock          | 0.000009 |
| optimizing           | 0.000004 |
| statistics           | 0.000035 |
| preparing            | 0.000010 |
| Sorting result       | 0.000003 |
| executing            | 0.000002 |
| Sending data         | 0.000004 |
| Creating sort index  | 0.014366 |
| end                  | 0.000005 |
| query end            | 0.000004 |
| closing tables       | 0.000006 |
| freeing items        | 0.000232 |
| cleaning up          | 0.000109 |
+----------------------+----------+
17 rows in set, 1 warning (0.00 sec)
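Since the first profile spends most of its time in “statistics” and “Sending data”, one obvious direction is refreshing the MyISAM index statistics, e.g.:

mysql> ANALYZE TABLE content.placement;

or offline, with the server stopped (the path is a placeholder for our data directory):

myisamchk --analyze /path/to/datadir/content/placement.MYI

That said, we would still like to understand why 5.6 behaves so differently from 5.5 here.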
Does anyone have any ideas about where we should dig further? Our extensive attempts at research and debugging are getting us nowhere at the moment. Thank you in advance for any input.
Best regards
Jan