PMM MySQL 8.0.11 Instances Not Showing Graphs

I have a mixed environment of MySQL 5.x and 8.0.11 instances.

The 5.x instances are Enterprise and the 8.0.11 instances are Community.

I have no issues monitoring the MySQL 5.x instances with PMM.

The 8.0.11 instances do not gather all the data, and the MySQL Overview graphs are not populated (see attachment).

How do I investigate further and correct this?

Diagnostics

[baasp@lcormysqlp01 ~]$ sudo pmm-admin check-network --no-emoji
PMM Network Status

Server Address | SERVERIP
Client Address | CLIENTIP

  • System Time
    NTP Server (0.pool.ntp.org) | unable to get ntp time: %!s()
    PMM Server | 2018-12-18 14:38:14 +0000 GMT
    PMM Client | 2018-12-18 09:43:08 -0500 EST
    PMM Client to PMM Server Time Drift | 294s
    Time is out of sync. Please make sure the server time is correct to see the metrics.

  • Connection: Client → Server


SERVER SERVICE        STATUS
Consul API            OK
Prometheus API        OK
Query Analytics API   OK

Connection duration | 1.350818ms
Request duration | -544.205µs
Full round trip | 806.613µs

  • Connection: Client ← Server

SERVICE TYPE    NAME                 REMOTE ENDPOINT   STATUS  HTTPS/TLS  PASSWORD
linux:metrics   linux_lcormysqlp01   CLIENTIP:42000    OK      YES        YES
mysql:metrics   mysql_lcormysqlp01   CLIENTIP:42002    DOWN    YES        YES

When an endpoint is down it may indicate that the corresponding service is stopped (run ‘pmm-admin list’ to verify).
If it’s running, check out the logs /var/log/pmm-*.log

When all endpoints are down but ‘pmm-admin list’ shows they are up and no errors in the logs,
check the firewall settings whether this system allows incoming connections from server to address:port in question.

Also you can check the endpoint status by the URL: http://SERVERIP/prometheus/targets

ENDPOINT STATUS - metrics-hr is the issue

mysql (50/72 up)

Endpoint                             State  Labels                           Last Scrape  Error
https://CLIENTIP:42002/metrics-lr    UP     instance="mysql_lcormysqlp01"    30.96s ago
https://CLIENTIP:42002/metrics-hr    DOWN   instance="mysql_lcormysqlp01"    139ms ago    no token found
https://CLIENTIP:42002/metrics-mr    UP     instance="mysql_lcormysqlp01"    45ms ago

Note that I have seen runs of sudo pmm-admin check-network --no-emoji where the remote endpoint is OK for both linux:metrics and mysql:metrics.

It makes no difference for the PMM graphs.
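One check I can still run from the PMM server side is whether it can actually reach the exporter port on the client. A rough sketch (CLIENTIP and USER:PASSWORD are placeholders; USER:PASSWORD stands for whatever basic-auth credentials the exporter was set up with):

# From the PMM server host: is the exporter port reachable at all?
nc -vz CLIENTIP 42002

# Fetch the endpoint the same way Prometheus would (-k skips certificate checks)
curl -k -u USER:PASSWORD https://CLIENTIP:42002/metrics-hr | head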

Hi,

I just set up two new MySQL 5.6 servers.

Both show:

  • Connection: Client ← Server

SERVICE TYPE    NAME           REMOTE ENDPOINT         STATUS  HTTPS/TLS  PASSWORD
linux:metrics   linux_NAME01   ###.###.###.###:42000   DOWN    YES        YES
mysql:metrics   mysql_NAME01   ###.###.###.###:42002   DOWN    YES        YES

Yet both populate the graphs in PMM correctly.

The issue appears to be MySQL 8.0.11 specific…

Could someone advise what to look into please?

  • System Time
    NTP Server (0.pool.ntp.org) | unable to get ntp time: %!s()
    PMM Server | 2018-12-18 14:38:14 +0000 GMT
    PMM Client | 2018-12-18 09:43:08 -0500 EST
    PMM Client to PMM Server Time Drift | 294s
    Time is out of sync. Please make sure the server time is correct to see the metrics.

Time drift like this can cause exactly these kinds of problems. Please check NTP on both your client and server; a sketch of the usual checks is below.
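A rough sketch (whether you have chrony or ntpd depends on the distro, so treat the exact commands as examples):

# Is the clock considered synchronized at all?
timedatectl status

# chrony-based hosts:
chronyc tracking

# ntpd-based hosts:
ntpstat
ntpq -p

# One-off resync on an ntpd host (with chrony use: chronyc makestep)
sudo ntpdate -u 0.pool.ntp.org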

Roma,

Thanks for the suggestion. It was badly out of sync and is now correct.

Unfortunately this did not correct the problem.

The issue is specific to MySQL 8.

Only the MySQL graphs do not populate. Specifically, if you wish to focus on one, “MySQL Client Thread Activity” does not populate.

However, I do see two that populate: “Process States” and “Top Process States Hourly”.

What is different between MySQL 5.6 and 8.x that would lead to this issue?

Thanks,

Peter

This is current status

[baasp@server ~]$ sudo pmm-admin check-network --no-emoji
PMM Network Status

Server Address | XXX.XXX.XXX.XXX
Client Address | XXX.XXX.XXX.XXX

  • System Time
    NTP Server (0.pool.ntp.org) | unable to get ntp time: %!s()
    PMM Server | 2019-01-08 18:28:09 +0000 GMT
    PMM Client | 2019-01-08 13:28:21 -0500 EST
    PMM Client to PMM Server Time Drift | OK

  • Connection: Client → Server


SERVER SERVICE        STATUS
Consul API            OK
Prometheus API        OK
Query Analytics API   OK

Connection duration | 2.294508ms
Request duration | -1.583215ms
Full round trip | 711.293µs

  • Connection: Client ← Server

SERVICE TYPE    NAME                   REMOTE ENDPOINT         STATUS  HTTPS/TLS  PASSWORD
linux:metrics   mysql80_lcormysqlp01   XXX.XXX.XXX.XXX:42000   OK      YES        YES
mysql:metrics   mysql80_lcormysqlp01   XXX.XXX.XXX.XXX:42002   DOWN    YES        YES

When an endpoint is down it may indicate that the corresponding service is stopped (run ‘pmm-admin list’ to verify).
If it’s running, check out the logs /var/log/pmm-*.log

When all endpoints are down but ‘pmm-admin list’ shows they are up and no errors in the logs,
check the firewall settings whether this system allows incoming connections from server to address:port in question.

Also you can check the endpoint status by the URL: http://XXX.XXX.XXX.XXX/prometheus/targets

I have attached the current status from Prometheus as well.

Does anyone have suggestions on how to diagnose and correct the issue, please?

Using http://www.dataarchitect.cloud/troubleshooting-percona-monitoring-and-management-pmm-metrics/

I have been able to determine that it is the high-resolution MySQL data that is not being gathered.
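One way to confirm this is to query the up metric through the Prometheus API on the PMM server; 1 means the scrape succeeds, 0 means it is down. SERVERIP is a placeholder, and depending on how the server was deployed you may need to add HTTP credentials with -u:

curl -sG http://SERVERIP/prometheus/api/v1/query \
  --data-urlencode 'query=up{instance="mysql80_lcormysqlp01"}'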

I need to get a port opened before I can view the data stream directly in a browser.

In addition, the client is on 1.16.0 and the server is running 1.17.0. This too is being addressed, but I don’t believe it is a factor, as the issue has persisted for months.
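For reference, this is roughly how I compare the versions on the client side (the server version should be visible in the PMM web interface):

# Client package version
pmm-admin --version

# What this client is registered against
sudo pmm-admin info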

The client is now running 1.17.0 and the issue persists.
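The exporter log on the client is where the startup messages below come from. The exact file name may vary, but check-network points at /var/log/pmm-*.log, so roughly:

# List the PMM client logs and look at the MySQL metrics exporter's one
ls /var/log/pmm-*.log
sudo tail -n 100 /var/log/pmm-mysql-metrics-42002.log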

On start of mysqld_exporter we see:

time="2019-01-14T15:28:22-05:00" level=info msg="Starting mysqld_exporter (version=, branch=, revision=)" source="mysqld_exporter.go:331"
time="2019-01-14T15:28:22-05:00" level=info msg="Build context (go=go1.10.1, user=, date=)" source="mysqld_exporter.go:332"
time="2019-01-14T15:28:22-05:00" level=info msg="HTTP basic authentication is enabled" source="mysqld_exporter.go:386"
time="2019-01-14T15:28:22-05:00" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:401"
time="2019-01-14T15:28:22-05:00" level=info msg="Enabled High Resolution scrapers:" source="mysqld_exporter.go:415"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.info_schema.innodb_metrics" source="mysqld_exporter.go:417"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.global_status" source="mysqld_exporter.go:417"
time="2019-01-14T15:28:22-05:00" level=info msg="Enabled Medium Resolution scrapers:" source="mysqld_exporter.go:421"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.info_schema.processlist" source="mysqld_exporter.go:423"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.slave_status" source="mysqld_exporter.go:423"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.perf_schema.eventswaits" source="mysqld_exporter.go:423"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.perf_schema.file_events" source="mysqld_exporter.go:423"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.info_schema.query_response_time" source="mysqld_exporter.go:423"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.info_schema.innodb_cmp" source="mysqld_exporter.go:423"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.info_schema.innodb_cmpmem" source="mysqld_exporter.go:423"
time="2019-01-14T15:28:22-05:00" level=info msg="Enabled Low Resolution scrapers:" source="mysqld_exporter.go:427"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.global_variables" source="mysqld_exporter.go:429"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.binlog_size" source="mysqld_exporter.go:429"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.info_schema.userstats" source="mysqld_exporter.go:429"
time="2019-01-14T15:28:22-05:00" level=info msg=" --collect.custom_query" source="mysqld_exporter.go:429"
time="2019-01-14T15:28:22-05:00" level=info msg="Listening on 192.168.72.192:42002" source="mysqld_exporter.go:438"
time="2019-01-14T14:37:23-05:00" level=info msg="Starting mysqld_exporter (version=, branch=, revision=)" source="mysqld_exporter.go:331"
time="2019-01-14T14:37:23-05:00" level=info msg="Build context (go=go1.10.1, user=, date=)" source="mysqld_exporter.go:332"
time="2019-01-14T14:37:23-05:00" level=info msg="HTTP basic authentication is enabled" source="mysqld_exporter.go:386"
time="2019-01-14T14:37:23-05:00" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:401"
time="2019-01-14T14:37:23-05:00" level=info msg="Enabled High Resolution scrapers:" source="mysqld_exporter.go:415"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.info_schema.innodb_metrics" source="mysqld_exporter.go:417"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.global_status" source="mysqld_exporter.go:417"
time="2019-01-14T14:37:23-05:00" level=info msg="Enabled Medium Resolution scrapers:" source="mysqld_exporter.go:421"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.slave_status" source="mysqld_exporter.go:423"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.info_schema.processlist" source="mysqld_exporter.go:423"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.perf_schema.eventswaits" source="mysqld_exporter.go:423"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.perf_schema.file_events" source="mysqld_exporter.go:423"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.info_schema.query_response_time" source="mysqld_exporter.go:423"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.info_schema.innodb_cmp" source="mysqld_exporter.go:423"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.info_schema.innodb_cmpmem" source="mysqld_exporter.go:423"
time="2019-01-14T14:37:23-05:00" level=info msg="Enabled Low Resolution scrapers:" source="mysqld_exporter.go:427"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.global_variables" source="mysqld_exporter.go:429"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.binlog_size" source="mysqld_exporter.go:429"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.info_schema.userstats" source="mysqld_exporter.go:429"
time="2019-01-14T14:37:23-05:00" level=info msg=" --collect.custom_query" source="mysqld_exporter.go:429"
time="2019-01-14T14:37:23-05:00" level=info msg="Listening on 192.168.72.192:42002" source="mysqld_exporter.go:438"

Yet Prometheus shows “no token found” for metrics-hr

The current results of:

sudo pmm-admin check-network --no-emoji

are all GREEN (see attached).

Yet I still do not have any data; Prometheus still shows “no token found” for metrics-hr.

And hence the detailed graphs for MySQL are blank.

Hi,

If you go to https://192.168.72.192:42002/metrics-hr in a browser, do you get a page with metrics, or do you see something else?

Per:

https://github.com/prometheus/prometheus/issues/3154

Such behavior is possible if there is a malformed metric name. Might it be that in your installation such a metric is being reported for some reason? A way to check for this is sketched below.
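One way to hunt for a malformed metric, assuming promtool (it ships with Prometheus) is available somewhere, is to pipe the scrape output through its checker; it will point at the offending line. USER:PASSWORD stands for the exporter's basic-auth credentials:

# Fetch the high-resolution endpoint and let promtool lint the exposition format
curl -sk -u USER:PASSWORD https://192.168.72.192:42002/metrics-hr | promtool check metrics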

Peter, Thanks for the reply.

I am just waiting on our internal team to open the ports so I can view https://192.168.72.192:42002/metrics-hr from my PC.

It may take a few days :( I will report the results once I have them.

But we are advancing and that is all that matters.

Peter

Hi,

There is an easier way :) You can log in to the server and use curl to get the output:

curl -k https://192.168.72.192:42002/metrics-hr

Attached is the output of curl -k -u admin:PASSWORDSECRET https://192.168.72.192:42002/metrics-hr

There is some data returned with a dash (-), but would that matter?

# TYPE go_gc_duration_seconds summary

go_gc_duration_seconds{quantile="0"} 2.6608e-05
go_gc_duration_seconds{quantile="0.25"} 4.1159e-05
go_gc_duration_seconds{quantile="0.5"} 5.0552e-05
go_gc_duration_seconds{quantile="0.75"} 7.095e-05
.
.
.

# HELP mysql_info_schema_innodb_metrics_adaptive_hash_index_adaptive_hash_searches_btree_total Number of searches using B-tree on an index search

.
.
.

# HELP mysql_info_schema_innodb_metrics_buffer_buffer_pool_read_ahead_evicted_total Read-ahead pages evicted without being accessed (innodb_buffer_pool_read_ahead_evicted)

It would appear that the data is being gathered, just not processed :(
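In case it helps, here is a rough filter I can run against the same curl output to surface any sample lines whose metric name does not look legal to Prometheus (this is only an approximation of the real parser, not a substitute for it):

# Print non-comment, non-blank lines that do not start with a valid metric name
# followed by optional labels and a space
curl -sk -u admin:PASSWORDSECRET https://192.168.72.192:42002/metrics-hr \
  | grep -vE '^(#|$)' \
  | grep -Ev '^[a-zA-Z_:][a-zA-Z0-9_:]*(\{[^}]*\})?[ ]'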

Thanks in advance for reviewing and advising on this issue.

Peter

curl_output_metric_hr.txt (72.8 KB)

baasp, can you also check the Percona Monitoring and Management logs, and especially the PMM server logs? Maybe we’ll find some ideas in the logs.

Have any ideas been sparked based on the output?

Hi, has the data provided sparked any insight into correcting the issue?

Sorry for the spam, my bad; I had not noticed that the thread had gone on to a second page.

Attached is the logs.zip from the server.

pmm-server_2019-01-22-13-22.zip (110 KB)

Output of check-network

[me@lcormysqld01 ~]$ sudo pmm-admin check-network --no-emoji
PMM Network Status

Server Address | SERVER_IP
Client Address | CLIENT_IP

  • System Time
    NTP Server (0.pool.ntp.org) | unable to get ntp time: %!s()
    PMM Server | 2019-01-22 13:24:48 +0000 GMT
    PMM Client | 2019-01-22 08:26:25 -0500 EST
    PMM Client to PMM Server Time Drift | 97s
    Time is out of sync. Please make sure the server time is correct to see the metrics.

  • Connection: Client → Server


SERVER SERVICE        STATUS
Consul API            OK
Prometheus API        OK
Query Analytics API   OK

Connection duration | 1.771548ms
Request duration | -615.793µs
Full round trip | 1.155755ms

  • Connection: Client ← Server

SERVICE TYPE    NAME                   REMOTE ENDPOINT   STATUS  HTTPS/TLS  PASSWORD
linux:metrics   mysql80_lcormysqld01   CLIENT_IP:42000   OK      YES        YES
mysql:metrics   mysql80_lcormysqld01   CLIENT_IP:42002   DOWN    YES        YES

When an endpoint is down it may indicate that the corresponding service is stopped (run ‘pmm-admin list’ to verify).
If it’s running, check out the logs /var/log/pmm-*.log

When all endpoints are down but ‘pmm-admin list’ shows they are up and no errors in the logs,
check the firewall settings whether this system allows incoming connections from server to address:port in question.

Also you can check the endpoint status by the URL: http://SERVER_IP/prometheus/targets
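For completeness, this is roughly how I verify on the client that the exporter is actually listening and that the local firewall is not in the way (the firewall-cmd line only applies if firewalld is in use):

# Is anything listening on the mysql:metrics port?
sudo ss -ltnp | grep 42002

# If firewalld manages this host, is the port open?
sudo firewall-cmd --list-ports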