How to tune query if the execution plan already looks perfect

Hi,

I have noticed quite a few queries appearing in the slow query log. I then ran EXPLAIN on those queries; however, the execution plans for these queries look OK to me.

This is one of the queries:

select handset0_.id as id0_, handset0_.version as version0_,
handset0_.imei as imei0_, handset0_.model as model0_,
handset0_.variant as variant0_
from handset handset0_
where handset0_.id=9200;

(Please excuse the weird column naming as it’s a prepared statement from Hibernate.)

+----+-------------+-----------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table     | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-----------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | handset0_ | const | PRIMARY       | PRIMARY | 4       | const |    1 |       |
+----+-------------+-----------+-------+---------------+---------+---------+-------+------+-------+
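Before digging further, it may be worth confirming what threshold the slow log is actually using (a quick check; assuming a MySQL 5.0-era server, where the variable names below apply — in 5.1+ the on/off switch is slow_query_log instead):

```sql
-- How long a query must run (in seconds) before it is logged as slow
SHOW VARIABLES LIKE 'long_query_time';
-- Whether the slow query log is enabled at all (5.0 name)
SHOW VARIABLES LIKE 'log_slow_queries';
```

If long_query_time is set very low, even a point lookup like the one above can land in the log during a busy period.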

What else should I look at to stop this query appearing in the slow query log?

Thanks,
Aimee

Are the tables MyISAM or Innodb?

For MyISAM tables it could also be table locking. For Innodb, trivial queries in the slow query log usually mean system overload at that point in time; for example, the query could be waiting in the queue to enter the Innodb kernel.
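If you are not sure which engine a table uses, it can be checked directly (a sketch; the table name is taken from the query above):

```sql
-- The Engine column of the output reports the storage engine
SHOW TABLE STATUS LIKE 'handset';
```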

First of all, thank you for the reply.

We are using Innodb tables.

> for Innodb trivial queries in slow query logs usually mean system overload at that point in time. For example query could be waiting in queue to enter Innodb kernel.

Then what areas can I look at to further troubleshoot this?

Also, after reading the show-innodb-status-walk-through page, I am thinking of making the following changes:

  1. Increase innodb_sync_spin_loops from 20 to 40, since we were only using 14% of CPU on the DB server during load testing.

  2. Decrease innodb_thread_concurrency from 8 to 4, as we only have 2 CPUs on the DB server.
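In my.cnf, the proposed changes would look something like this (values as listed above; a sketch only, to be verified against the server version's documentation):

```ini
[mysqld]
# Spin longer before a thread yields, since CPU is mostly idle
innodb_sync_spin_loops = 40
# Limit threads inside the InnoDB kernel to 2x the CPU count
innodb_thread_concurrency = 4
```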

Do you think this sounds reasonable?

Thanks in advance,
Aimee

I would not touch innodb_sync_spin_loops, but rather take a look at SHOW INNODB STATUS while you’re doing load testing.

14% CPU usage may mean two things: either it is some lock contention, or your workload requires a lot of disk IO.

Heavy disk IO easily explains queries like this taking a few seconds, simply due to disk starvation.

More memory, faster disks, or looking at whether the schema can be optimized could be a good idea in such a case.

I have allocated 1.6G for the innodb buffer pool, and the innodb status output indicates the buffer pool hit rate is 100%; therefore, I believe the memory allocated is sufficient (correct me if I am wrong).

==============================================================
Total memory allocated 1878242646; in additional pool allocated 9884672
Buffer pool size 102400
Free buffers 79286
Database pages 20660
Modified db pages 23
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 18419, created 2241, written 337383
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000
==============================================================
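Reading those numbers: with InnoDB's default 16 KB page size, the 102400-page pool is the 1.6 GB allocated, and 79286 free buffers means only roughly 20% of the pool is actually in use (20660 database pages out of 102400). The same figures can be tracked between load tests without parsing the status output (assuming the MySQL 5.0+ status counters):

```sql
-- pages_total, pages_free and pages_data mirror the
-- Buffer pool size / Free buffers / Database pages lines above
SHOW STATUS LIKE 'Innodb_buffer_pool_pages%';
```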

I think it’s more likely to be a disk issue, as we are using NFS :( (I was handed this system with NFS already configured, and have been told it’s not possible to change it at the moment).

However, I don’t recall iostat showing high disk utilisation; I will go back and re-collect the stats.
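Alongside iostat, MySQL’s own counters show whether InnoDB itself is waiting on the disk (a sketch; assuming 5.0+ status variables):

```sql
-- Innodb_data_pending_reads/_writes/_fsyncs show outstanding IO;
-- the cumulative read, write and fsync counts appear alongside them
SHOW STATUS LIKE 'Innodb_data_%';
```

Non-zero pending counts sampled during the slow periods would point at the NFS-backed disk rather than at MySQL.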

Thank you very much for your input.
Aimee

Right. You have only a portion of the buffer pool used, assuming you’ve warmed it up properly.

NFS is scary with Innodb; it is a frequent cause of data corruption if there are some minor incompatibilities.