Is 15 to 30 seconds acceptable for this query/setup?

Specifically, I have the following (anonymized) query showing up in my Slow Query log:

Query_time: 41  Lock_time: 0  Rows_sent: 1  Rows_examined: 3421238
SELECT count(*) FROM Table_A LEFT JOIN Table_B ON Table_A.B_id=Table_B.id LEFT JOIN Table_C ON Table_A.C_id=Table_C.id WHERE Table_A.deleted=0;

If it’s not obvious, this is an automatically generated query behind a page that displays a paginated result set, with a header like “Displaying 1 - 20 of 40,678”.

The program’s method of getting that count is to strip out the field selection and replace it with count(*), leaving the WHERE clause intact.
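For illustration, the generated pair of queries looks roughly like this (the selected column names are made up; only the count(*) query and the WHERE clause are taken from the actual log):

```sql
-- The page query (hypothetical column list), limited for pagination:
SELECT Table_A.id, Table_A.title, Table_B.name, Table_C.label
FROM Table_A
LEFT JOIN Table_B ON Table_A.B_id = Table_B.id
LEFT JOIN Table_C ON Table_A.C_id = Table_C.id
WHERE Table_A.deleted = 0
LIMIT 0, 20;

-- The auto-generated count query: same FROM/WHERE, SELECT list swapped:
SELECT count(*)
FROM Table_A
LEFT JOIN Table_B ON Table_A.B_id = Table_B.id
LEFT JOIN Table_C ON Table_A.C_id = Table_C.id
WHERE Table_A.deleted = 0;
```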

When the result set is filtered, say, by Date Range or other parameters, the query executes at an (almost) acceptable speed, because of proper indexing, etc.

But when users clear search terms, or first access the given page (with no search filters set), it simply paginates all results that aren’t (soft-) deleted.

Here are some facts:

  • Table_A is 495MB data, 160MB indexes, ~ 1,093,967 rows.
  • Table_B is 388kB data, 14kB indexes, ~ 88 rows.
  • Table_C is 130MB data, 11MB indexes, ~ 84,481 rows.

The EXPLAIN result for this query is:

+----+-------------+---------+--------+---------------+---------+---------+-----------------+---------+-------------+
| id | select_type | table   | type   | possible_keys | key     | key_len | ref             | rows    | Extra       |
+----+-------------+---------+--------+---------------+---------+---------+-----------------+---------+-------------+
|  1 | SIMPLE      | Table_A | ref    | idx_a         | idx_a   | 1       | const           | 1093964 |             |
|  1 | SIMPLE      | Table_B | eq_ref | PRIMARY       | PRIMARY | 36      | db.Table_A.B_id | 1       | Using index |
|  1 | SIMPLE      | Table_C | eq_ref | PRIMARY       | PRIMARY | 8       | db.Table_A.C_id | 1       | Using index |
+----+-------------+---------+--------+---------------+---------+---------+-----------------+---------+-------------+
3 rows in set (0.00 sec)

You can see that “deleted” is indexed, and is the only index usable by that particular query.

The Machine:
… is a Dell PowerEdge 1950 with two quad-core 2.0 GHz Xeons and 8 GB of memory. It does practically nothing but run the MySQL server.

The Question:
Is it reasonable for this query to take ~30 seconds to execute (because of the sheer amount of data maybe)? ~15?
Is that unknowable from the information provided?

Our next step is to call in a “highly paid consultant”, because I’m out of answers/research, and I fall just short of knowing from experience if this is actually acceptable when dealing with a million+ rows of data and two left joins.

You should not run this query in the first place.

http://www.mysqlperformanceblog.com/2008/09/24/four-ways-to-optimize-paginated-displays/

Joining on shorter keys would also help.
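The key_len of 36 in your EXPLAIN suggests Table_B has a wide primary key, possibly a CHAR(36) UUID. A sketch of one migration path, assuming that is the case (all new column names here are hypothetical):

```sql
-- Give Table_B a compact integer surrogate key (UNIQUE satisfies
-- MySQL's requirement that an AUTO_INCREMENT column be a key):
ALTER TABLE Table_B
    ADD COLUMN b_int_id INT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE;

-- Backfill the matching column on Table_A:
ALTER TABLE Table_A ADD COLUMN B_int_id INT UNSIGNED NULL;
UPDATE Table_A
JOIN Table_B ON Table_A.B_id = Table_B.id
   SET Table_A.B_int_id = Table_B.b_int_id;

-- ...then index Table_A.B_int_id and join on it instead of B_id.
```

A 4-byte key versus a 36-byte one shrinks the indexes involved in the join considerably, which matters once you are trying to keep everything in memory.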

You could pre-store the count in a separate table and update it when you alter table A, B or C.
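A sketch of that approach (table and trigger names are made up). Note that because both LEFT JOINs are eq_ref lookups on primary keys, they can neither multiply nor filter rows, so the count only depends on Table_A:

```sql
-- Counter table seeded with the current count:
CREATE TABLE row_counts (
    name      VARCHAR(64) PRIMARY KEY,
    row_count BIGINT UNSIGNED NOT NULL
);
INSERT INTO row_counts
SELECT 'table_a_live', count(*) FROM Table_A WHERE deleted = 0;

-- Keep it current on insert:
DELIMITER //
CREATE TRIGGER table_a_count_ins AFTER INSERT ON Table_A
FOR EACH ROW BEGIN
    IF NEW.deleted = 0 THEN
        UPDATE row_counts SET row_count = row_count + 1
         WHERE name = 'table_a_live';
    END IF;
END//
DELIMITER ;
-- ...plus matching AFTER UPDATE / AFTER DELETE triggers to handle
-- the soft-delete flag flipping and hard deletes.
```

The pagination header then reads the count with a primary-key lookup instead of scanning a million rows.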

If you really want to run this query, make sure all data fits in memory and/or you extend the index on deleted to include the columns B_id and C_id as a suffix.
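A sketch of the covering-index option, reusing the anonymized index name from your EXPLAIN. With all three columns in one index, MySQL can answer the count from the index alone instead of touching the 495 MB of row data:

```sql
-- Extend the index on `deleted` to cover the join columns as a suffix:
ALTER TABLE Table_A
    DROP INDEX idx_a,
    ADD INDEX idx_a (deleted, B_id, C_id);
```

After this, the Table_A row in EXPLAIN should show “Using index” in the Extra column, like the other two tables already do.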