I am setting up a MySQL server for a ~12GB database on an extra-large EC2 instance. For the disks, I am using four EBS volumes striped together with LVM.
All the tables are InnoDB.
I’m starting to load-test this setup, and I’ve noticed that the occasional query is extremely slow. The queries that appear in the slow query log are all ones that create temporary tables, and watching lsof, I can see that those tables are being written to disk.
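In case it’s useful, this is roughly how I’m watching for them (MySQL names its on-disk temp table files with a '#sql' prefix):

    # list mysqld's open files and filter for on-disk temp tables
    lsof -p "$(pidof mysqld)" | grep '#sql'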
What’s weird, though, is that I have tmp_table_size and max_heap_table_size set to 1G. I’m 99% certain that none of these temp tables are even close to that size. So, why are the tables being written to disk?
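For reference, I’m verifying the effective settings and the temp table counters with the standard SHOW commands (MySQL caps in-memory temp tables at the smaller of tmp_table_size and max_heap_table_size):

    -- confirm the running server actually picked up the settings
    SHOW VARIABLES LIKE 'tmp_table_size';
    SHOW VARIABLES LIKE 'max_heap_table_size';

    -- compare on-disk temp tables against total temp tables created
    SHOW GLOBAL STATUS LIKE 'Created_tmp%';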
Another weird fact here is that I’m not seeing any iowait when watching top or iostat -x 1. Yet some queries are still taking 14s.
My key_buffer_size is also set to 512M.
I realize that the better approach here would be to rearchitect those queries. That’s the subject of another thread, though.
If you can’t easily correct any of these issues in your queries, you can try putting your temp directory on a tmpfs or ramfs partition. This will speed things up noticeably, but it will certainly not be anywhere near as fast as avoiding the on-disk temp table creation entirely.
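A minimal sketch of that, assuming a made-up mount point of /mnt/mysqltmp and a 2G size (tune both to your box):

    # create a RAM-backed directory and hand it to mysqld
    mkdir -p /mnt/mysqltmp
    mount -t tmpfs -o size=2g,mode=0750 tmpfs /mnt/mysqltmp
    chown mysql:mysql /mnt/mysqltmp

    # then in my.cnf (restart mysqld afterwards):
    [mysqld]
    tmpdir = /mnt/mysqltmp

Keep in mind that tmpfs contents vanish on reboot and count against RAM, so size it conservatively and add a matching /etc/fstab entry if you want the mount to survive restarts.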
You should also be able to get a rough estimate of the size of your temp tables just by checking your temp directory with ls. If the tables really are the problem, they’ll be around long enough for you to catch them before they get deleted; if they’re not around long enough to catch, then they probably aren’t what’s slowing your query down (unless somehow you’ve written a query that needs to create a ton of temp tables, like some awful dependent subquery).
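Something like this will catch them, assuming the default tmpdir of /tmp (the on-disk files show up with a '#sql' prefix):

    # poll the temp directory once a second for temp table files
    watch -n 1 'ls -lh /tmp/#sql* 2>/dev/null'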
I’ve never used EBS, so I can’t tell you whether it’s normal not to see any I/O wait there, but I doubt it is. If none of your CPU cores are showing increased I/O wait or user time, I’d start investigating possible locking issues.
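The standard first stops for that are the process list and InnoDB’s own status dump:

    -- look for connections sitting in a locked or waiting state
    SHOW FULL PROCESSLIST;

    -- transactions, lock waits, and semaphore info straight from InnoDB
    SHOW ENGINE INNODB STATUS\G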