I had the understanding that query execution times would not be part of the binlog produced by MySQL, so I was wondering whether and how pt-query-digest would support those attributes.
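As far as I can tell, the only per-statement timing the binary log carries is the whole-second exec_time field in the Query event header, which you can see in the text dump (the file name below is just an example):

# Decode the binlog to text and look at the Query event headers; the exec_time
# value there is recorded in whole seconds only, which would explain why only
# 0s and 1s values show up in the digest.
mysqlbinlog mysql-bin.000123 | grep exec_time | head -n 5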
As a matter of fact, I just ran an analysis on a 1GB binlog, and all the queries land in the 1s bucket of the query time distribution. I therefore don't quite understand the profile below, which shows an average execution time of about 1ms (35ms stddev) together with such a distribution:
# Query 1: 18.49 QPS, 0.02x concurrency, ID 0x46D36B88A206F8EE at byte 91426211
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.98
# Time range: 2013-12-03 10:20:54 to 11:24:01
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         22   70024
# Exec time     26     85s       0      1s     1ms       0    35ms       0
# Query size     7  43.17M     612     672  646.51  652.75   13.41  621.67
# @@session.sq   0       0       0       0       0       0       0       0
# error code     0       0       0       0       0       0       0       0
# String:
# Databases xxxxx (23787/33%), yyyyy (20152/28%)... 7 more
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#   1s  ################################################################
#  10s+
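For reference, this is roughly how I produced the profile above (file names and options are just an example, not my exact command):

# pt-query-digest cannot read the binary format directly, so the binlog is
# decoded to text with mysqlbinlog first, then aggregated with --type binlog.
mysqlbinlog mysql-bin.000123 > mysql-bin.000123.txt
pt-query-digest --type binlog mysql-bin.000123.txt > binlog-report.txt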
Finally, I have the feeling that we should rely on the slow query log in the first place, but then what threshold should we use? 100ms? 10ms? What about very high volumes of ultra-fast queries that would then never be captured? And what about the logging overhead this puts on the server?
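To make the threshold question concrete, this is the kind of setting I have in mind (the values are only examples to illustrate the trade-off):

# Enable the slow query log and pick a threshold; long_query_time accepts
# fractional seconds, and 0 logs every statement, which is where my worry
# about logging overhead on the server comes from.
mysql -e "SET GLOBAL slow_query_log = 'ON'"
mysql -e "SET GLOBAL long_query_time = 0.1"   # 100ms threshold
mysql -e "SET GLOBAL long_query_time = 0.01"  # 10ms threshold
mysql -e "SET GLOBAL long_query_time = 0"     # log everything, incl. ultra-fast queries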
Thanks in advance for your help.