PMM2 Linux metrics only

roma.novikov, can you tell me how I can send you a private message? It seems that this function is not available to me.

magenbrot, it looks like a region was specified for only one of your nodes (tyr.ovtec.it). The latest Grafana version doesn’t support empty values, so only nodes with regions are shown. This behavior will be changed in future PMM releases. For now, you should remove and re-add the node without a region, or specify a region for the other nodes using the same procedure.

Hi adivinho! Thank you for your response. Can you tell me how I can add hosts without a region?

Hi magenbrot, please use the following command to remove/re-add your node tyr.ovtec.it:

pmm-admin config --force tyr.ovtec.it

e.g.

# pmm-admin config --force ip-10-178-1-195
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.
#

Can you make sure that port 42001 is open on the host being monitored? This worked for me.
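One quick way to test that from the server’s side is a bash /dev/tcp probe; this is a sketch, and port_open is just a hypothetical helper name, not a PMM tool:

```shell
#!/usr/bin/env bash
# Probe a TCP port from the monitoring server's side.
port_open() {
  local host=$1 port=$2
  # /dev/tcp/<host>/<port> is a bash pseudo-device; timeout avoids
  # hanging on filtered (silently dropped) packets.
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

if port_open tyr.ovtec.it 42001; then
  echo "port reachable"
else
  echo "port closed or filtered"
fi
```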

Hi adivinho, I’ve done that for all my hosts, but I can still see only one host. I’ve even reset the pmm-server and pmm-data containers to try it with a fresh setup.

I’ve used the following commands to start the container and to configure the clients (ports 80 and 443 are not available on the pmm-server host, so I’m using a reverse proxy to access the Grafana GUI).

pmm-server:


docker create -v /srv --name pmm-data percona/pmm-server:2 /bin/true
docker run -d -p 10.20.30.2:8190:80 -p 10.20.30.2:8191:443 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:2

pmm-client:


pmm-admin config --server-insecure-tls --server-url=https://admin:xxx@10.20.30.2:8191 --force tyr.ovtec.it
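Before registering against a non-standard port like 8191, it can be worth confirming that the server URL actually answers over TLS. A minimal curl probe (a sketch; the address and credentials are the ones from the commands above):

```shell
#!/usr/bin/env bash
# Print the HTTP status code for the PMM server URL; "000" means the
# connection itself failed. -k skips certificate verification, which
# matches the --server-insecure-tls flag passed to pmm-admin.
server_status() {
  curl -ks --max-time 5 -o /dev/null -w '%{http_code}' "$1"
}

server_status "https://admin:xxx@10.20.30.2:8191/v1/version"
echo
```

Even a 401 here proves the port answers; only 000 means the connection failed.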

Thanks, but I have no firewall on the internal interfaces that I use for PMM. I also had tcpdump running for about an hour on this port, but there were no packets.

Hi!

I finally found the problem with my setup. All my hosts except tyr.ovtec.it have two network interfaces, one with a public IP address and another for the internal network. So the “usual” pmm-admin config added the hosts with their public IP addresses, but since those IPs are firewalled, the hosts did not appear in PMM.

I now added them with this command and everything is working fine:
pmm-admin config --server-insecure-tls --server-url=https://admin:password@10.20.30.2:8191 --force 10.20.30.6 generic loki.mydomain.de

In this example, 10.20.30.2 is the host where PMM is running in a Docker container, and 10.20.30.6 is the internal IP address of the host I want to monitor.
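For hosts with several interfaces, the internal address can be read from `ip -o -4 addr show`. The parsing below is a sketch: the helper and the 203.0.113.x public address are hypothetical, while 10.20.30.6 is the internal address from the command above.

```shell
#!/usr/bin/env bash
# List "interface address" pairs so the internal IP can be picked out
# and passed to pmm-admin config. Field 4 of `ip -o -4 addr show` is
# the CIDR address, e.g. 10.20.30.6/24.
list_ipv4() {
  awk '{split($4, a, "/"); print $2, a[1]}'
}

# Hypothetical sample; on a real host pipe `ip -o -4 addr show` instead.
sample='2: eth0    inet 203.0.113.10/24 brd 203.0.113.255 scope global eth0
3: eth1    inet 10.20.30.6/24 brd 10.20.30.255 scope global eth1'
printf '%s\n' "$sample" | list_ipv4
# prints:
# eth0 203.0.113.10
# eth1 10.20.30.6
```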

Thanks again for all your help!

regards
Oli

Thank you for updating this post, it could help others in the future and we appreciate the share.

It seems that I have a similar problem (PMM2).

I don’t see any nginx metrics.

Hi @mosad,

Have you added any service for monitoring, or only configured pmm-agent on the node?

pmm-agent only

It could be a firewall issue.

You may switch the metrics flow mode to push in the latest PMM version, 2.12.0.

First, remove the already-added node on the Inventory page.

Then run the configuration command again with the flag --metrics-mode=push

error: unknown long flag ‘–metrics-mode’

it is not a firewall issue

I added an external service (nginx.service)

but maybe I don’t understand PMM2 and am doing something wrong

Could you please help me with this issue?

Here are examples of configuration commands.

You should use one of them.
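For reference, based on the commands used earlier in this thread, a push-mode registration should look roughly like this (the address, credentials, and node name are the earlier examples, not required values):

```shell
# Re-register the node with push-mode metrics; requires client and
# server version 2.12.0 or later.
pmm-admin config --server-insecure-tls \
  --server-url=https://admin:xxx@10.20.30.2:8191 \
  --metrics-mode=push --force tyr.ovtec.it
```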

Please make sure that metrics mode “push” is shown in the pmm-admin list output.
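For example, a quick grep over that output can count exporters that are not yet in push mode (the helper and the sample table are hypothetical sketches; run `pmm-admin list` on the client for the real output):

```shell
#!/usr/bin/env bash
# Count exporter agents whose line does not mention push mode.
count_non_push() {
  grep -i 'exporter' | grep -civ 'push'
}

# Hypothetical sketch of the agent section of `pmm-admin list` output.
sample='AGENT TYPE           STATUS   METRICS MODE
node_exporter        RUNNING  push
mysqld_exporter      RUNNING  pull'

printf '%s\n' "$sample" | count_non_push
# prints 1: mysqld_exporter is still pulling
```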

I used --metrics-mode=push

but still

Screenshot 2020-12-14 at 16.20.09.png

why?

Could you provide the output of the following command?

Please replace 172.17.0.2 with your PMM server IP.

curl -k https://admin:admin@172.17.0.2/v1/version
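If the endpoint answers, the JSON body includes the server version, and a `sort -V` comparison shows whether push mode (new in 2.12.0) is available. A sketch assuming the response contains a top-level `"version"` field:

```shell
#!/usr/bin/env bash
# True when the given version is 2.12.0 or newer (push mode available).
supports_push() {
  [ "$(printf '%s\n' '2.12.0' "$1" | sort -V | head -n1)" = "2.12.0" ]
}

# Hypothetical response body; the real one comes from the curl command above.
body='{"version":"2.11.1"}'
ver=$(printf '%s' "$body" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')

if supports_push "$ver"; then
  echo "push mode available (server $ver)"
else
  echo "upgrade needed: $ver is older than 2.12.0"
fi
# prints: upgrade needed: 2.11.1 is older than 2.12.0
```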

Screenshot 2020-12-14 at 18.30.18.png

Push mode was not introduced until v2.12.0, so you will need to upgrade both client and server to take advantage of it. Since you are using an AMI, the easiest way to upgrade is to use the panel on the main dashboard and allow the packages to update (the Amazon Marketplace takes a while to approve new AMIs, so they always lag behind what we have in our download repo).

Another place to check what issues you’re having on the current version is to navigate your browser to https://&lt;your-server&gt;:&lt;port&gt;/prometheus/targets

In <=2.11.1 it’s going to be HTML output that will show failures from the standpoint of the PMM server; in 2.12.0 it’s text-based, but you’ll still be able to see errors for any “down”-state exporters, like:

state=down, endpoint=http://centos.local:42000/metrics?collect%5B%5D=...,
labels={agent_id="/agent_id/59bd4c5e-cfa8-429b-afe0-fc39d6cef1e1",agent_type="mysqld_exporter",instance="/agent_id/59bd4c5e-cfa8-429b-afe0-fc39d6cef1e1",
job="mysqld_exporter_agent_id_59bd4c5e-cfa8-429b-afe0-fc39d6cef1e1_mr-5s",machine_id="/machine_id/e52213f019f94f929ee4895dc9ec97b2",node_id="/node_id/321a3d71-9bb6-4f7e-91bc-cf710f26d322",
node_name="centos",node_type="generic",service_id="/service_id/64639b5d-3481-47bf-9718-e1db62c820ca",service_name="PS5.7",
service_type="mysql"}, last_scrape=4.319s ago, scrape_duration=4.001s, 
error="error when scraping \"http://centos.local:42000/metrics?collect%5B%5D...\" with timeout 4s: timeout"
job="node_exporter_agent_id_9fc926a3-47b3-406c-8c83-3c3c54feb180_hr-1s" (0/1 up)

In that example the client is down, but firewalls or unroutable client IPs (from the server’s perspective) could give the same output. You will likely see some sort of “context deadline exceeded” (aka timeout) error. If you want to post a sample of the error, we’ll see what additional help we can provide.
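To scan the 2.12.0 text output for failures programmatically, a grep like the following works (a hypothetical helper; the sample records are trimmed from the example above):

```shell
#!/usr/bin/env bash
# Print the endpoint of every scrape target reported as down.
list_down_endpoints() {
  grep 'state=down' | grep -o 'endpoint=[^,]*'
}

# Trimmed records in the format shown above.
sample='state=down, endpoint=http://centos.local:42000/metrics, error="timeout"
state=up, endpoint=http://centos.local:42001/metrics'

printf '%s\n' "$sample" | list_down_endpoints
# prints: endpoint=http://centos.local:42000/metrics
```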

OK, that’s helpful info, but I have another problem.

I’m using the Percona image from AWS, and I can’t update PMM to version 2.12.