Got a crazy situation where the dev isn’t available for almost two weeks.
Percona MySQL Version 5.5.31-30.3-log x86_64
Some application code (a website) creates a query that is basically a count of the items in a table. However, they used an object model, so the underlying tables mean there is a crazy join to build the virtual table they need to count. Fine in a unit test maybe, but I don't think they tested this at volume.
I've messed about with all sorts of memory and temp settings, but to no avail; pretty soon, every time, the database will have 200+ sessions all running the same query.
Run on an otherwise idle server it takes about 30 secs, but obviously once you get 10+ sessions running it, it grinds almost to a halt. The result row count is nearly always <300.
I've tried pt-kill with query matching for anything running over 180 secs, but I have to be careful, as the same query seems to run for others as well, and the site uses PHP with pconnect, so killing can be counterproductive.
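Roughly, the manual equivalent of what I'm asking pt-kill to match looks like this (the LIKE pattern is just a placeholder, not the real query text):

-- Find sessions that have been running the big count for more than 180 seconds.
SELECT id, user, host, time, LEFT(info, 80) AS query_start
FROM   information_schema.processlist
WHERE  command = 'Query'
  AND  time > 180
  AND  info LIKE 'SELECT COUNT%';

Even when that finds victims, killing them tends to just make the pconnect'd PHP side reconnect and fire the same count again.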
I would have thought all the data should be in memory, but it still runs slowly.
It's a dual quad-core box with HT, so it appears as 16 cores, but the box itself is fine and when unloaded looks like this:
top - 00:20:28 up 170 days, 8:14, 1 user, load average: 0.93, 0.64, 0.57
Tasks: 284 total, 1 running, 282 sleeping, 0 stopped, 1 zombie
Mem: 16458848k total, 10149508k used, 6309340k free, 812872k buffers
Swap: 3997688k total, 61024k used, 3936664k free, 3232364k cached
Has anyone got an idea how to configure for, say, 200 sessions all running this crazy count, until we can get the dev to take it out?
The query basically does a row count on a temp table created by (rough shape sketched after this list):
a SELECT from tablea
then JOINs tableb
(4 times by the way)
then JOINs tablec
then JOINs tabled
then JOINs tablee
then JOINs tablef
then JOINs tableg
then JOINs tableh
then defines the WHERE
then GROUPs them
then ORDERs them
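To make that concrete, the shape is roughly as follows; the table names are real, but the aliases, join columns and the WHERE/GROUP/ORDER columns are placeholders I've made up, since the real generated SQL is much longer:

-- Rough shape of the generated query (join columns and predicates are placeholders).
SELECT COUNT(*) FROM (
    SELECT a.id
    FROM tablea a
    JOIN tableb b1 ON b1.a_id = a.id
    JOIN tableb b2 ON b2.a_id = a.id
    JOIN tableb b3 ON b3.a_id = a.id
    JOIN tableb b4 ON b4.a_id = a.id
    JOIN tablec c  ON c.a_id  = a.id
    JOIN tabled d  ON d.a_id  = a.id
    JOIN tablee e  ON e.a_id  = a.id
    JOIN tablef f  ON f.a_id  = a.id
    JOIN tableg g  ON g.a_id  = a.id
    JOIN tableh h  ON h.a_id  = a.id
    WHERE a.some_flag = 1       -- placeholder predicate
    GROUP BY a.id               -- placeholder grouping
    ORDER BY a.id               -- placeholder ordering
) AS counted;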
Some stats on those tables (row counts rounded):
Table name Rows Size
tablea 22000 13M
tableb 750000 100M (JOINed 4 times)
tablec 25000 12M
tabled 100000 100M
tablee 25000 10M
tablef 80000 20M
tableg 350 125K
tableh 90000 20M
All tables are InnoDB, and there are indexes on all the join columns (EXPLAIN shows them all being used).
Some variables:
thread_cache_size = 8
Current threads_cached = 3
Current threads_per_sec = 0
max_connections = 500
Current threads_connected = 4
Historic max_used_connections = 44
Current InnoDB index space = 623 M
Current InnoDB data space = 682 M
Current InnoDB buffer pool free = 81 %
innodb_buffer_pool_size = 6.00 G
Current MyISAM index space = 5 M
key_buffer_size = 16 M
Key cache miss rate is 1 : 774
Key buffer free ratio = 72 %
Query cache is enabled
query_cache_size = 512 M
query_cache_used = 505 M
query_cache_limit = 24 M
Query cache Memory fill ratio = 98.70 %
query_cache_min_res_unit = 1 K
sort_buffer_size = 2 M
read_rnd_buffer_size = 256 K
join_buffer_size = 64.00 M
table_open_cache = 6000 tables
table_definition_cache = 40000 tables
There are 440 tables
There are 980 open tables.
Current max_heap_table_size = 512 M
Current tmp_table_size = 512 M
Of 349403 temp tables, 33% were created on disk
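For anyone wanting to check the same thing on their own box, that temp table figure comes from the status counters:

-- Ratio of on-disk temp tables to total implicit temp tables since startup.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
-- Created_tmp_disk_tables / Created_tmp_tables gives the ~33% on-disk figure.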