
Huge difference in query time

oferbec — Entrant, Current User Role: Beginner
Hi,

I'm using MySQL version 5.6.23.

The query I'm running:

SELECT stationeve0_.id AS col_0_0_, stationeve0_.notify_on AS col_1_0_, stationeve0_.event_date AS col_2_0_,
       stationeve0_.source_id AS col_3_0_, stationeve0_.event_type AS col_4_0_, stationeve0_.log_level_id AS col_5_0_,
       stationeve0_.action_result AS col_6_0_, stationeve0_.relevent_data AS col_7_0_, stationeve0_.extra_data AS col_8_0_,
       stationeve0_.transaction_id AS col_9_0_, stationimp1_.caption AS col_10_0_, stationimp1_.identity_key AS col_11_0_,
       stationimp1_.id AS col_12_0_, stationeve0_.user_id AS col_13_0_, stationeve0_.request_uuid AS col_14_0_
FROM station_event_log stationeve0_
INNER JOIN station stationimp1_ ON stationeve0_.station_Id=stationimp1_.id
WHERE (stationimp1_.deleted IS NULL OR stationimp1_.deleted=0) AND stationeve0_.service_provider_id=3 AND stationeve0_.notify_on>='2015-04-25 13:17:27' AND stationeve0_.notify_on<='2015-04-26 13:17:27.593'
ORDER BY stationeve0_.notify_on DESC, stationeve0_.id DESC
LIMIT 50

This query takes a few milliseconds.
When I change the dates in the WHERE clause and ask for 48 hours instead of 24, it still takes a few milliseconds. At 72 hours I have to kill the query manually after 2-3 minutes because it still hasn't finished. I narrowed the ranges until I found a threshold where a one-minute change is the difference between lightning-fast results and no results at all. The threshold is dynamic: if yesterday it was 50 hours and 20 minutes (fast, while 50 hours and 21 minutes was very slow), today it can be 47 hours and 32 minutes, for example.
I can choose a completely different time range (the beginning of April, or March, for example) and I still see exactly the same behavior.
I'm really clueless at this point. Can anyone point me in the right direction and help me understand and solve it?

The create table of the relevant table:

CREATE TABLE `station_event_log` (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`source_id` INT(11) NOT NULL,
`event_type` BIGINT(11) NOT NULL,
`notify_on` DATETIME(3) NOT NULL,
`extra_data` VARCHAR(3000) NULL DEFAULT NULL,
`station_id` BIGINT(20) NULL DEFAULT NULL,
`log_level_id` INT(11) NULL DEFAULT NULL,
`relevent_data` VARCHAR(3000) NULL DEFAULT NULL,
`root_action_id` VARCHAR(255) NULL DEFAULT NULL,
`action_result` VARCHAR(1000) NULL DEFAULT NULL,
`transaction_id` BIGINT(20) NULL DEFAULT NULL,
`error_code_id` INT(11) NULL DEFAULT NULL,
`user_id` BIGINT(20) NULL DEFAULT NULL,
`station_socket_id` BIGINT(20) NULL DEFAULT NULL,
`service_provider_id` BIGINT(20) NULL DEFAULT NULL,
`event_date` DATETIME(3) NOT NULL,
`request_uuid` VARCHAR(250) NULL DEFAULT NULL,
PRIMARY KEY (`id`, `notify_on`),
INDEX `idx_station_event_log_1` (`notify_on`),
INDEX `idx_station_event_log_2` (`event_type`),
INDEX `fk_station_event_log_station_socket_id` (`station_socket_id`),
INDEX `idx_station_event_log_3` (`log_level_id`),
INDEX `idx_station_event_log_complex_1` (`station_id`, `notify_on`, `service_provider_id`, `event_type`)
)
COLLATE='utf8_general_ci';

In addition, the table is partitioned by year on `notify_on` and sub-partitioned by month (on the same column).
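
For context, a partition layout matching that description would look roughly like the sketch below. This is a hypothetical reconstruction: the actual partitioning clause was not posted, so the partition names, year ranges, and subpartition count are assumptions.

```sql
-- Hypothetical sketch of the partitioning described above; the real
-- clause was not shown, so names and ranges here are assumptions.
-- Note the PRIMARY KEY (id, notify_on) already includes the
-- partitioning column, as MySQL requires.
ALTER TABLE station_event_log
PARTITION BY RANGE (YEAR(notify_on))
SUBPARTITION BY HASH (MONTH(notify_on))
SUBPARTITIONS 12 (
    PARTITION p2014 VALUES LESS THAN (2015),
    PARTITION p2015 VALUES LESS THAN (2016),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```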

Comments

  • wagnerbianchi — Remote DBA, Current User Role: Patron
    Can you share the EXPLAIN output and a profile of this query? Additionally, can you run pt-duplicate-key-checker, or review the table's structure, to check for duplicate indexes or unnecessary columns in the index definitions of the CREATE TABLE?
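    For reference, the diagnostics requested above could be gathered on 5.6 with something like the following (the column list is trimmed here for brevity; run it against the full original query):

    ```sql
    -- EXPLAIN PARTITIONS shows which partitions the optimizer scans;
    -- comparing a "fast" range against a "slow" range may reveal a
    -- change in partition pruning or in the chosen index.
    EXPLAIN PARTITIONS
    SELECT stationeve0_.id
    FROM station_event_log stationeve0_
    INNER JOIN station stationimp1_ ON stationeve0_.station_id = stationimp1_.id
    WHERE (stationimp1_.deleted IS NULL OR stationimp1_.deleted = 0)
      AND stationeve0_.service_provider_id = 3
      AND stationeve0_.notify_on >= '2015-04-25 13:17:27'
      AND stationeve0_.notify_on <= '2015-04-26 13:17:27.593'
    ORDER BY stationeve0_.notify_on DESC, stationeve0_.id DESC
    LIMIT 50;

    -- The query profiler (deprecated in 5.6 but still available):
    SET profiling = 1;
    -- ... run the query here ...
    SHOW PROFILES;
    SHOW PROFILE FOR QUERY 1;
    ```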

Copyright ©2005 - 2020 Percona LLC. All rights reserved.