
PMM server cannot collect metrics from a client

Hi.

I have Ubuntu 14.04.

I installed the PMM server via Docker, set up about 10 clients successfully, and I get all metrics from Percona Server 5.6.

However, I cannot collect metrics from one Percona XtraDB Cluster node and one Percona Server after installing the client there.

Telnet works from the PMM server to the clients and from the clients to the PMM server.
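For completeness, an equivalent reachability probe without telnet, using bash's built-in /dev/tcp (the host and port here are placeholders; substitute the masked [server]/[az] addresses and ports 42000/42002):

```shell
# TCP reachability probe without telnet, via bash's /dev/tcp pseudo-device.
# check_port prints "open" if a TCP connection can be established, "closed" otherwise.
check_port() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
        && echo "$1:$2 open" || echo "$1:$2 closed"
}
check_port 127.0.0.1 42000   # replace with the real client/server address and port
```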

Here is the output from pmm-admin:

pmm-admin list

pmm-admin 1.5.2

PMM Server | [az]
Client Name | [az]
Client Address | [az]
Service Manager | linux-upstart

SERVICE TYPE     NAME    LOCAL PORT    RUNNING    DATA SOURCE                                    OPTIONS
mysql:queries    [az]    -             YES        root:***@unix(/var/run/mysqld/mysqld.sock)     query_source=slowlog, query_examples=true
linux:metrics    [az]    42000         YES        -
mysql:metrics    [az]    42002         YES        root:***@unix(/var/run/mysqld/mysqld.sock)

pmm-admin check-network

PMM Network Status

Server Address | [server]
Client Address | [server]
  • System Time
    NTP Server (0.pool.ntp.org) | 2017-12-17 09:59:04 +0000 UTC
    PMM Server | 2017-12-17 09:59:03 +0000 GMT
    PMM Client | 2017-12-17 09:43:31 +0000 UTC
    PMM Server Time Drift | OK
    PMM Client Time Drift | 933s
    Time is out of sync. Please make sure the client time is correct to see the metrics.
    PMM Client to PMM Server Time Drift | 932s
    Time is out of sync. Please make sure the server time is correct to see the metrics.
  • Connection: Client --> Server
    SERVER SERVICE         STATUS
    Consul API             OK
    Prometheus API         OK
    Query Analytics API    OK
    Connection duration | 271.3088ms
    Request duration    | 276.327ms
    Full round trip     | 547.6358ms
  • Connection: Client <-- Server
    SERVICE TYPE     NAME        REMOTE ENDPOINT    STATUS    HTTPS/TLS    PASSWORD
    linux:metrics    [server]    [server]:42000     DOWN      YES          -
    mysql:metrics    [server]    [server]:42002     DOWN      YES          -
When an endpoint is down it may indicate that the corresponding service is stopped (run 'pmm-admin list' to verify).
If it's running, check out the logs /var/log/pmm-*.log

When all endpoints are down but 'pmm-admin list' shows they are up and no errors in the logs,
check the firewall settings whether this system allows incoming connections from server to address:port in question.

You can also check the endpoint status at http://[server]/prometheus/targets
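One thing worth noting from the output above: the reported client drift can be recomputed from the two timestamps check-network printed (PMM Server 09:59:03 vs PMM Client 09:43:31), a quick sketch using GNU date:

```shell
# Recompute the client-to-server drift from the timestamps check-network printed above
server_epoch=$(date -u -d "2017-12-17 09:59:03" +%s)   # PMM Server time
client_epoch=$(date -u -d "2017-12-17 09:43:31" +%s)   # PMM Client time
echo "drift: $(( server_epoch - client_epoch ))s"      # prints "drift: 932s", matching the report
```

Since check-network itself warns that out-of-sync time prevents metrics from showing, correcting the client clock (e.g. via ntpd/ntpdate on Ubuntu 14.04) and re-running check-network would be the usual first step.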

Logs

time="2017-12-17T09:23:20Z" level=info msg="Starting node_exporter (version=0.14.0+percona.2, branch=master, revision=8ea8a4521f8f42d581847ee3d271dbb2a1fe8146)" source="node_exporter.go:142"
time="2017-12-17T09:23:20Z" level=info msg="Build context (go=go1.9.2, [email protected], date=20171130-13:11:09)" source="node_exporter.go:143"
time="2017-12-17T09:23:20Z" level=info msg="Enabled collectors:" source="node_exporter.go:162"
time="2017-12-17T09:23:20Z" level=info msg=" - meminfo" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - netdev" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - netstat" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - time" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - loadavg" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - filefd" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - filesystem" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - stat" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - uname" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - vmstat" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg=" - diskstats" source="node_exporter.go:164"
time="2017-12-17T09:23:20Z" level=info msg="Starting HTTPS server of 10.53.7.20:42000 ..." source="server.go:106"
2017/12/17 09:25:29 http: TLS handshake error from 10.53.7.20:60804: tls: first record does not look like a TLS handshake
2017/12/17 09:43:32 http: TLS handshake error from 10.53.7.20:60840: tls: first record does not look like a TLS handshake
time="2017-12-17T09:23:22Z" level=info msg="Starting mysqld_exporter (version=1.5.2, branch=master, revision=c5b2f15a2b2b46eb53192c6aded039c90f406733)" source="mysqld_exporter.go:798"
time="2017-12-17T09:23:22Z" level=info msg="Build context (go=go1.9.2, user=, date=)" source="mysqld_exporter.go:799"
time="2017-12-17T09:23:22Z" level=info msg="HTTPS/TLS is enabled" source="mysqld_exporter.go:843"
time="2017-12-17T09:23:22Z" level=info msg="Listening on 10.53.7.20:42002" source="mysqld_exporter.go:846"
2017/12/17 09:25:29 http: TLS handshake error from 10.53.7.20:47937: tls: first record does not look like a TLS handshake
2017/12/17 09:30:03 http: TLS handshake error from 10.53.7.20:47943: EOF
2017/12/17 09:43:32 http: TLS handshake error from 10.53.7.20:47973: tls: first record does not look like a TLS handshake
Version: percona-qan-agent 1.5.2
Basedir: /usr/local/percona/qan-agent
PID: 63403
API: 10.51.7.23/qan-api
UUID: f8b058d3e1c5470063e05fb7155207c1
      2017/12/17 09:23:27.833911 main.go:163: Starting agent...
      2017/12/17 09:23:27.834629 main.go:331: Agent is ready
      2017/12/17 09:23:28.661498 main.go:204: API is ready
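A detail in the logs above: "first record does not look like a TLS handshake" means something connected to the exporter port speaking plain HTTP (or telnet), while both exporters listen with HTTPS. Manual probes therefore need https:// plus -k for the self-signed certificate; a sketch, using the client address as it appears in the log lines:

```shell
# Probe the exporters the way Prometheus does: HTTPS, ignoring the self-signed cert (-k).
# A plain http:// (or telnet) probe against these ports is exactly what logs the handshake error.
CLIENT=10.53.7.20   # client address taken from the exporter logs above
curl -sk --max-time 5 -o /dev/null -w "linux:metrics -> HTTP %{http_code}\n" "https://$CLIENT:42000/metrics"
curl -sk --max-time 5 -o /dev/null -w "mysql:metrics -> HTTP %{http_code}\n" "https://$CLIENT:42002/metrics"
```

An HTTP 200 from both would point the investigation back at the server side (firewall or Prometheus scrape config) rather than the exporters.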
