"MySQL Query Responce Time" show "No datapoints"

All graphs on "MySQL Query Response Time" show "No datapoints" for all servers.

These graphs are available only for Percona Server and MariaDB. They require the query response time plugin to be installed and the query_response_time_stats variable to be turned on.
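For reference, the plugin installation and the variable are both documented for Percona Server; a sketch of the one-time setup (plugin names and the `.so` file are the documented defaults for Percona Server 5.7, verify against your version's docs; these commands need a live server, so they are not runnable standalone):

```shell
# Install the query response time plugins (one-time, Percona Server).
mysql -e "INSTALL PLUGIN QUERY_RESPONSE_TIME_AUDIT SONAME 'query_response_time.so'"
mysql -e "INSTALL PLUGIN QUERY_RESPONSE_TIME SONAME 'query_response_time.so'"
mysql -e "INSTALL PLUGIN QUERY_RESPONSE_TIME_READ SONAME 'query_response_time.so'"
mysql -e "INSTALL PLUGIN QUERY_RESPONSE_TIME_WRITE SONAME 'query_response_time.so'"

# Turn the statistics collection on; also add
# query_response_time_stats=ON to my.cnf so it survives a restart.
mysql -e "SET GLOBAL query_response_time_stats = ON"
```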

BAD_SERVER:

[root@mysql ~]# mysql -e "SHOW PLUGINS" | grep QUERY
QUERY_RESPONSE_TIME ACTIVE INFORMATION SCHEMA query_response_time.so GPL
QUERY_RESPONSE_TIME_AUDIT ACTIVE AUDIT query_response_time.so GPL
QUERY_RESPONSE_TIME_READ ACTIVE INFORMATION SCHEMA query_response_time.so GPL
QUERY_RESPONSE_TIME_WRITE ACTIVE INFORMATION SCHEMA query_response_time.so GPL

[root@mysql ~]# mysql -e "show variables like '%query_response_time_stats%'"
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| query_response_time_stats | ON    |
+---------------------------+-------+

can you share the output of the following command?

mysql -e 'SELECT @@query_response_time_stats'

[root@mysql ~]# mysql -e 'SELECT @@query_response_time_stats'
+-----------------------------+
| @@query_response_time_stats |
+-----------------------------+
|                           1 |
+-----------------------------+

Sorry, but on the good servers this setting was 0. After I enabled it, the data started to show. But on the bad server this setting was already 1, and all graphs on "MySQL Query Response Time" still show "No datapoints".

can you share the output of the following command for the bad and the good host?

curl https://pmm-client-ip:42002/metrics-mr --insecure | grep mysql_info_schema_read_query_response_time_seconds_sum

curl https://bad-server:42002/metrics-mr --insecure | grep mysql_info_schema_read_query_response_time_seconds_sum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  117k    0  1058    0     0     99      0  0:20:11  0:00:10  0:20:01   241mysql_info_schema_read_query_response_time_seconds_sum 2.8901127170120003e+06
100  117k  100  117k    0     0  11269      0  0:00:10  0:00:10 --:--:--  35050

curl https://good-server:42002/metrics-mr --insecure | grep mysql_info_schema_read_query_response_time_seconds_sum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 4168k    0  1057    0     0   1057      0  1:07:18 --:--:--  1:07:18  1057mysql_info_schema_read_query_response_time_seconds_sum 6846.318266
100 4168k  100 4168k    0     0  2509k      0  0:00:01  0:00:01 --:--:--  2509k

Looks like the /metrics-mr response time is very long;
we need to understand why :frowning:
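For comparing the two hosts it can help to pull just the metric value out of a saved scrape instead of reading it through the curl progress noise; a minimal sketch, assuming the payload was first saved with `curl -sk https://bad-server:42002/metrics-mr -o /tmp/mr.txt` (the two-line sample below is hypothetical stand-in data):

```shell
# Stand-in for a saved /metrics-mr payload; on a real host create it with:
#   curl -sk https://bad-server:42002/metrics-mr -o /tmp/mr.txt
cat > /tmp/mr.txt <<'EOF'
# TYPE mysql_info_schema_read_query_response_time_seconds summary
mysql_info_schema_read_query_response_time_seconds_sum 2.8901127170120003e+06
EOF

# Skip comment lines; print only the value of the _sum series.
awk '!/^#/ && $1 == "mysql_info_schema_read_query_response_time_seconds_sum" { print $2 }' /tmp/mr.txt
```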

can you in one terminal run the following command

curl https://bad-server:42002/metrics-hr --insecure

and in a second terminal try to catch the ~10-second query by running the following command several times in parallel with the curl command?

mysql -e 'SHOW FULL PROCESSLIST;'
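The "several times in parallel" step can also be scripted as a snapshot loop; a sketch (three one-second snapshots just to keep it short; the `mysql` line is commented out so the loop itself is runnable anywhere):

```shell
OUT=/tmp/processlist.txt
: > "$OUT"                      # truncate any previous run
for i in 1 2 3; do
    echo "--- snapshot $i $(date +%T) ---" >> "$OUT"
    # On the bad server, uncomment this while the slow curl runs:
    # mysql -e 'SHOW FULL PROCESSLIST' >> "$OUT"
    sleep 1
done
echo "snapshots written to $OUT"
```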

"curl https://bad-server:42002/metrics-hr --insecure" finishes too fast, I have no chance to do anything in parallel. Here is the output: http://depositfiles.com/files/xlmdl8525 But "curl https://bad-server:42002/metrics-mr --insecure" works slowly, and I ran "mysql -e 'SHOW FULL PROCESSLIST;'" a few times. Output: http://depositfiles.com/files/un9c4vg27

can you share the PROCESSLIST?
http://depositfiles.com/files/un9c4vg27 is the metrics-mr content…

I can't, because there is always at least one process restoring data into some DBs, so the "mysql -e 'SHOW FULL PROCESSLIST;'" output is huge. Maybe I can give the info in another form?
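A filtered processlist might be small enough to share; a sketch that keeps only long-running statements, which should still include the exporter's slow query (the 5-second threshold and 100-character truncation are arbitrary choices; this needs a live server, so it is not runnable standalone):

```shell
# Show only statements running longer than 5 seconds, with the query
# text truncated, so the output stays manageable.
mysql -e "SELECT ID, USER, DB, TIME, STATE, LEFT(INFO, 100) AS QUERY_HEAD
          FROM information_schema.PROCESSLIST
          WHERE COMMAND <> 'Sleep' AND TIME > 5
          ORDER BY TIME DESC"
```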

And one more thing: I updated PMM and Percona Server to the latest versions (1.1.2 and 5.7.17-13). Nothing changed.
I can't even choose the bad server from the "Host" dropdown list; it is missing there.

if we cannot catch the long query, we can try to disable the /metrics-mr collectors.

first of all, try to disable the processlist collector on the bad node

pmm-admin remove mysql:metrics
pmm-admin add mysql:metrics --disable-processlist

wait two minutes and refresh targets page

if the bad-server/metrics-mr target is still red,
we need to disable all the /metrics-mr collectors; the bad-server/metrics-mr target should then become green, and after that we can enable the collectors one by one.

disable all of the following collectors in the exporter configuration file on the bad server:

collect.slave_status
collect.info_schema.processlist
collect.perf_schema.eventswaits
collect.perf_schema.file_events
collect.perf_schema.tablelocks
collect.info_schema.query_response_time
collect.engine_innodb_status

all of these parameters should be set to false

after that restart service

systemctl daemon-reload
systemctl restart pmm-mysql-metrics-42002

wait two minutes, refresh targets page

if the bad-server/metrics-mr target becomes green, try to enable the collectors one by one (with a restart and a targets check after each of them)
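The flip-one-flag-then-restart cycle can be scripted; a sketch against a hypothetical scratch copy of the options file (in PMM 1.x the real flags live in the pmm-mysql-metrics service configuration, whose path is not shown here):

```shell
CFG=/tmp/pmm-mysql-metrics.options   # scratch copy, not the real file
cat > "$CFG" <<'EOF'
-collect.perf_schema.eventswaits=false
-collect.perf_schema.file_events=false
-collect.perf_schema.tablelocks=false
EOF

# Flip exactly one collector flag from false to true.
enable_collector() {
    sed -i "s|^-collect\.$1=false$|-collect.$1=true|" "$CFG"
}

enable_collector perf_schema.file_events
grep '=true' "$CFG"
# After each real change: systemctl restart pmm-mysql-metrics-42002,
# then wait two minutes and re-check the targets page before the next one.
```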

Ok, I checked all of these settings:

Good settings (no red target):
collect.slave_status=true
collect.info_schema.processlist=true (false by default)
collect.info_schema.query_response_time=true
collect.engine_innodb_status=true (no such parameter in the file by default)

Bad settings (metrics-mr down, instance="mysql.db", 1.07s ago, context deadline exceeded); set to false for now:
collect.perf_schema.eventswaits
collect.perf_schema.file_events
collect.perf_schema.tablelocks (was false by default)

I see some graphs with problems have started to work :slight_smile: I will give full info later about what is good and what is bad

"MySQL Query Response Time" shows all data (resolved)
"MySQL General" -> "Process States" shows data (resolved)
"MySQL Performance Schema" shows "No datapoints"

Hi aleksey.filippov, you possibly don't have the PERFORMANCE_SCHEMA enabled; you may check with:
$ sandboxes/multi_cmsb_5_7_17-mariadb-10_1_21-mariadb-10_2_4/node1/use -e "show global variables like 'performance_schema'"
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+
If that reports OFF, you need to set it in your my.cnf and restart. Then you should start seeing datapoints. Let us know how you progress.
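Since performance_schema is read-only at runtime, it has to go into my.cnf before the restart; a minimal fragment:

```ini
[mysqld]
performance_schema = ON
```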

[root@mysql ~]# mysql -e "show variables like '%performance_schema%'"
+-----------------------------------------------------------+-------+
| Variable_name                                             | Value |
+-----------------------------------------------------------+-------+
| performance_schema | ON |
| performance_schema_accounts_size | -1 |
| performance_schema_digests_size | 10000 |
| performance_schema_events_stages_history_long_size | 10000 |
| performance_schema_events_stages_history_size | 10 |
| performance_schema_events_statements_history_long_size | 10000 |
| performance_schema_events_statements_history_size | 10 |
| performance_schema_events_transactions_history_long_size | 10000 |
| performance_schema_events_transactions_history_size | 10 |
| performance_schema_events_waits_history_long_size | 10000 |
| performance_schema_events_waits_history_size | 10 |
| performance_schema_hosts_size | -1 |
| performance_schema_max_cond_classes | 80 |
| performance_schema_max_cond_instances | -1 |
| performance_schema_max_digest_length | 1024 |
| performance_schema_max_file_classes | 80 |
| performance_schema_max_file_handles | 32768 |
| performance_schema_max_file_instances | -1 |
| performance_schema_max_index_stat | -1 |
| performance_schema_max_memory_classes | 400 |
| performance_schema_max_metadata_locks | -1 |
| performance_schema_max_mutex_classes | 200 |
| performance_schema_max_mutex_instances | -1 |
| performance_schema_max_prepared_statements_instances | -1 |
| performance_schema_max_program_instances | -1 |
| performance_schema_max_rwlock_classes | 50 |
| performance_schema_max_rwlock_instances | -1 |
| performance_schema_max_socket_classes | 10 |
| performance_schema_max_socket_instances | -1 |
| performance_schema_max_sql_text_length | 1024 |
| performance_schema_max_stage_classes | 150 |
| performance_schema_max_statement_classes | 203 |
| performance_schema_max_statement_stack | 10 |
| performance_schema_max_table_handles | -1 |
| performance_schema_max_table_instances | -1 |
| performance_schema_max_table_lock_stat | -1 |
| performance_schema_max_thread_classes | 50 |
| performance_schema_max_thread_instances | -1 |
| performance_schema_session_connect_attrs_size | 512 |
| performance_schema_setup_actors_size | -1 |
| performance_schema_setup_objects_size | -1 |
| performance_schema_users_size | -1 |
+-----------------------------------------------------------+-------+

https://www.percona.com/forums/questions-discussions/percona-monitoring-and-management/47839-mysql-query-responce-time-show-no-datapoints?p=48145#post48145
You should read from the start; we checked a lot of settings :slight_smile:

HA! My apologies aleksey.filippov , when I clicked through to your post it was the only one on the page so I assumed it was post #1, not… post #1 of page #2!!

First off, I'm glad that the Response Time Distribution and Process States are working again. :thumbsup:

in terms of :

-collect.perf_schema.eventswaits=true
-collect.perf_schema.file_events=true
-collect.perf_schema.indexiowaits=true
-collect.perf_schema.tableiowaits=true
-collect.perf_schema.tablelocks=true

Can you please review the file on your bad server and verify these are set to true, and then restart the service:

sudo systemctl restart pmm-mysql-metrics-42002.service

Let us know if this helps.