Unable to start linux:metrics and mysql:metrics

pmm-admin list shows:

linux:metrics and mysql:metrics not running

when I try to add them:

pmm-admin add mysql:metrics

I get:

Error adding MySQL metrics: there is already one instance with this name under monitoring.

Where is this coming from?

What happened before they stopped? Did they stop right after being added?
For more information, check /var/log/pmm-*.log.
You can also try to start them with pmm-admin start --all and check the logs.
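
For example, the following is usually enough to see what happened (commands as above; the log path is the default client location):

pmm-admin list                    # which services are registered and whether they are running
pmm-admin start --all             # try to start everything that is stopped
tail -n 50 /var/log/pmm-*.log     # look for error/fatal lines from the exporters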

I am facing the same problem (linux:metrics and mysql:metrics are not running).

While checking the logs, I could see the “fatal” error messages below.

For mysql:metrics ( pmm-mysql-metrics-42002.log )

time="2016-10-18T14:29:31Z" level=info msg="Starting mysqld_exporter (version=1.0.5, branch=master, revision=3c95bb2c647443430db2337627d5f677665f41c7)" source="mysqld_exporter.go:631"

time="2016-10-18T14:29:31Z" level=info msg="Build context (go=go1.7.1, user=, date=)" source="mysqld_exporter.go:632"

time="2016-10-18T14:29:31Z" level=info msg="Listening on <IP ADDRESS>:42002" source="mysqld_exporter.go:666"

time="2016-10-18T14:29:31Z" level=fatal msg="listen tcp <IP ADDRESS>:42002: bind: cannot assign requested address" source="mysqld_exporter.go:667"

For linux:metrics ( pmm-linux-metrics-42000.log )

time="2016-10-18T14:29:15Z" level=info msg="Starting node_exporter (version=0.12.0, branch=master, revision=df8dcd2)" source="node_exporter.go:135"

time="2016-10-18T14:29:15Z" level=info msg="Build context (go=go1.6.2, user=root&#64;ff68505a5469, date=20160505-22:14:18)" source="node_exporter.go:136"

time="2016-10-18T14:29:15Z" level=info msg="Enabled collectors:" source="node_exporter.go:155"

time="2016-10-18T14:29:15Z" level=info msg=" - diskstats" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - loadavg" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - netstat" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - stat" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - uname" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - vmstat" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - filesystem" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - meminfo" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - netdev" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg=" - time" source="node_exporter.go:157"

time="2016-10-18T14:29:15Z" level=info msg="Listening on <IP ADDRESS>:42000" source="node_exporter.go:176"

time="2016-10-18T14:29:15Z" level=fatal msg="listen tcp <IP ADDRESS>:42000: bind: cannot assign requested address" source="node_exporter.go:179"

They never start. It looks like the bind fails for ports 42000 and 42002.

for mysql:metrics

level=fatal msg="listen tcp 50.57.195.196:42002: bind: cannot assign requested address" source="mysqld_exporter.go:667"

mysql:queries starts fine

for linux:metrics:

level=info msg="Listening on 50.57.195.196:42000" source="node_exporter.go:176"
level=fatal msg="listen tcp 50.57.195.196:42000: bind: cannot assign requested address" source="node_exporter.go:179"

  • pmm-admin is running as root
  • trying a different port with --server-port does not fix the problem
  • using netstat and other tools - nothing appears to be bound to 42000 or 42002 (see the checks sketched below)
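
For reference, the checks were along these lines (the address to look for is whatever client_address is set to in pmm.yml):

netstat -lntp | grep -E ':4200[02]'   # is anything already listening on the exporter ports?
ip addr show | grep 'inet '           # is the configured client address actually assigned to a local interface?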

I found that if I replace the IP address with 127.0.0.1 in the yml file, linux:metrics and mysql:metrics start, and pmm-admin list now shows all three: linux:metrics, mysql:queries, and mysql:metrics. But pmm-admin check-network indicates a problem for linux:metrics and mysql:metrics.
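
For reference, the file I edited looks roughly like this (path and field names as used in this thread; the values are placeholders and the exact layout may differ by client version):

cat /usr/local/percona/pmm-client/pmm.yml
server_address: <PMM SERVER IP>
client_address: 127.0.0.1
client_name: <CLIENT HOSTNAME>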

Lastly, I get no metrics-monitor graph data on the pmm-server side - the graphs are all empty. I do get query analytics though.

Looking at the diagram on page 4 showing pmm-client and pmm-server: does pmm-server need to connect to the ports on the pmm-clients? In that case I will need to open the firewall. For query analytics, it looks like the connection goes from the client to the server only.

So my theory is that query analytics works because the client always binds to 127.0.0.1:42001, while mysql:metrics and linux:metrics do not work because I cannot bind to the port and pmm-server cannot connect to the pmm-client (even if the bind did work)???
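
One way to test that theory from the pmm-server host (the exporters answer plain HTTP on those ports, assuming no auth/TLS is configured in this version; the IP is whatever address the client registered):

curl -s http://<PMM CLIENT IP>:42000/metrics | head   # linux:metrics exporter
curl -s http://<PMM CLIENT IP>:42002/metrics | head   # mysql:metrics exporter

If these work from the client itself but time out from the server, the problem is connectivity rather than the exporters.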

Hi rjennings,

I too faced a similar problem and was able to resolve it. To make sure it is the same issue I faced, can you let me know whether the MySQL server where your PMM client is running is an AWS EC2 instance?

If that is the case, then the error might be due to the exporters trying to bind to the public IP of your EC2 instance (on EC2 the public IP is provided through NAT and is not assigned to the instance itself). You can refer to this for more details. To fix the issue, follow the steps below:
  • Edit the “/usr/local/percona/pmm-client/pmm.yml” file and change the value of client_address to the private IP of your instance
  • Restart the required services again (both steps are sketched below)
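
In shell terms, the two steps amount to roughly this (substitute your instance’s private IP):

vi /usr/local/percona/pmm-client/pmm.yml   # set client_address to the instance’s private IP
pmm-admin restart --all                    # restart the services so they pick up the change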

Edit: Even after changing the client_address to my private IP (to fix the bind issue), I still did not see any metrics-monitor data in Grafana. One workaround (which I still feel is not the right solution) is to first add all the MySQL services using the public IP, which is set by default once you configure the PMM client. As expected, mysql:metrics and linux:metrics will not run due to the bind issue. Next, edit “/usr/local/percona/pmm-client/pmm.yml” again and change the client_address to your private IP and the client_name to a different name than before. Now restart all the services by running ‘pmm-admin restart --all’. You’ll get an error like the one below.

# pmm-admin restart --all
We have found system services disconnected from PMM server.
Usually, this happens when data container is wiped before all monitoring services are removed or client is uninstalled.

Orphaned local services: pmm-linux-metrics-42000, pmm-mysql-metrics-42002, pmm-mysql-queries-42001

To continue, run 'pmm-admin repair' to remove orphaned services.

Run the command below to clear the orphaned services.

pmm-admin repair

Now, no services will be running when you check the status by executing ‘pmm-admin list’.

As a last step, add the mysql service again.

pmm-admin add mysql --user username --password passwd

Now, when you check the graphs, you’ll see the metrics getting updated. In the hosts list, you’ll see the name from when you first added the services, while the client address was still the public IP (this will also create a duplicate host entry under hosts in the QAN dashboard).
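
Condensed, the whole workaround is roughly this sequence (user, password, and IPs are placeholders):

pmm-admin config --server-address <PMM SERVER IP>        # client_address defaults to the public IP
pmm-admin add mysql --user username --password passwd    # mysql:metrics and linux:metrics fail to bind
vi /usr/local/percona/pmm-client/pmm.yml                  # change client_address to the private IP and client_name to a new name
pmm-admin restart --all                                   # reports the orphaned services shown above
pmm-admin repair                                          # removes the orphaned services
pmm-admin add mysql --user username --password passwd    # re-add; the metrics services now bind and the graphs populate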

I basically followed the above steps and got it working. But this is surely not the right approach, and I am not sure if it is a bug. Maybe someone from the Percona team can take this forward.

Hi,
No, my Percona MySQL server and pmm-client are not running on an EC2 instance, but the host where I am running the pmm-server Docker container is an EC2 instance.

I was able to fix the bind issue, so pmm-admin list now shows all three running, but ‘pmm-admin check-network’ still shows a problem with linux:metrics and mysql:metrics.

I have firewall policies to allow inbound TCP connections to ports 42000 and 42002 from the PMM server to the PMM client. I also added network policies on EC2 to allow the other direction. The latter, I believe, is what is allowing the mysql:queries data to show on the PMM server.

The mysql:queries service should always listen on 127.0.0.1; this is okay.

All other services ending with :metrics should listen on the client address, which is automatically detected when you run pmm-admin config --server-address ... and is saved into pmm.yml.

Sometimes the detected client address can be wrong, for example when the PMM client is behind NAT or some kind of proxy.
In that case, you can specify the client address explicitly. There is no need to edit pmm.yml, and it’s not recommended to do so (there are command flags to change each option).
For example: pmm-admin config --server-address 1.2.3.4 --client-address 5.6.7.8

To change the client address, you need to remove all services first.

Also, the client address can’t be 127.0.0.1, because Prometheus (the metrics system) running inside the container needs to connect to the PMM client to collect metrics, and 127.0.0.1 would be the container’s own address. Thus, it should be an address reachable from the server, usually a private one.
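
So the re-configuration sequence would be something like this (5.6.7.8 standing in for a correct, reachable client address; the removal step may differ slightly by pmm-admin version):

pmm-admin rm --all                                                   # remove the existing services first
pmm-admin config --server-address 1.2.3.4 --client-address 5.6.7.8
pmm-admin add mysql --user username --password passwd
pmm-admin check-network                                              # verify the server can reach the exporters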

I would say the following might be the problem: both the MySQL server with pmm-client AND the pmm-server are each behind a NAT firewall. I can open ports 42000, 42001, and 42002 in both directions on both firewalls, but the question is which IP address each side uses. Each side needs the public IP address of the other, but the agents need to bind to their local IP address. I tried configuring the pmm-client settings using the public IP. Then I edited the yml file, replaced the client IP address with the private IP, and started the agents; now the agents bind to ports 42000 and 42002 (42001 always binds to 127.0.0.1). Next I tried editing the yml file once again to put back the public IP, in case it’s needed for pmm-admin check-network, for example.

All of this does not result in the desired outcome.

The NAT routes are set correctly. It seems like additional config settings to separate the local and remote IP addresses would help.

Yes, you are right: if everything is behind its own NAT, each service has to talk to the other by public IP.

And I think the problem is as follows: the PMM client listens on a private IP, and the PMM server tries to use that address to connect to the client but fails, since it needs a publicly reachable IP. You can’t use the public IP on the client because it is not bound locally; it is translated by NAT.

The PMM client address should be both bound locally on the client (the service should be able to listen on it) and accessible by the PMM server, because we use it as an endpoint address.

Unfortunately, we don’t split the client address between how it’s bound locally and how it’s accessed externally.

The solution is a bit of a hacky one: you can use the external IP as the client address so the PMM server connects to the client fine (you will also need port forwarding to the client’s private address:port). Then add the service using pmm-admin. Then edit the service file manually, replace the external IP with the private IP, and restart the service. This way it will work, but it will not persist across remove/re-add using pmm-admin.

If all my assumptions are correct, please confirm. We may provide a solution such as adding a new option/flag to pmm-admin, e.g. --client-private-address.

Yes, that is exactly what I said. I would very much like to see that new flag added so we can get this up and running. (We are Percona customers, btw - if that has any weight.)

I’ll try the sequence you mentioned - I could swear that I tried that (and then some) with no luck.

Unfortunately, that technique does not work. If by “edit service file” you mean the pmm.yml file, then no. When you try to restart (actually it’s start, because it fails to start at first and was never running), even after editing the yml file with the private IP, the service still tries to use the public IP (whatever was in pmm.yml when pmm-admin config --server was issued).

Error adding linux metrics: another client with the same name ‘’ but different address detected.

This client address is <NEW PRIVATE IP>, the other one - <OLD PUBLIC IP>.
Re-configure this client with the different name using 'pmm-admin config' command.

Taking it one step further: if you combine your trick with the previous poster’s trick of ALSO changing the hostname to a different one:

We have found system services disconnected from PMM server.
Usually, this happens when data container is wiped before all monitoring services are removed or client is uninstalled.

Orphaned local services: pmm-mysql-metrics-42002, pmm-mysql-queries-42001

To continue, run 'pmm-admin repair' to remove orphaned services.

One such fix is to set up pmm-client using the private IP address and then, on the pmm-server host, run something like the following (placeholders standing in for the client’s private and public addresses):

iptables -t nat -I OUTPUT -d <PMM CLIENT PRIVATE IP> -j DNAT --to-destination <PMM CLIENT PUBLIC IP>
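
You can confirm the rule is in place with the standard listing:

iptables -t nat -L OUTPUT -n --line-numbers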

I’m now getting some of the graph data for mysql but none for system related graphs.

Is there anything else I can do to get the other graphs producing data?

By “editing the service file” I meant /etc/init.d/pmm-mysql-metrics-42002, /etc/systemd/system/pmm-mysql-metrics-42002.service, or /etc/init/pmm-mysql-metrics-42002.conf, depending on your service manager. Please don’t edit pmm.yml.
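
On a systemd host, for example, that edit amounts to roughly the following (IPs are placeholders; repeat the same for the linux:metrics unit on port 42000):

sed -i 's/<EXTERNAL IP>/<PRIVATE IP>/g' /etc/systemd/system/pmm-mysql-metrics-42002.service
systemctl daemon-reload
systemctl restart pmm-mysql-metrics-42002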

And again, “you will also need port forwarding to the client’s private address/port” is important, as PMM can’t handle NAT (this is what you did in your last post).

For the system graphs, you need to add a rule for the other port as well (most likely, 42000).

Regarding the above problems: pmm-client 1.0.7 solves them with the new --bind-address option.
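
For example, something like the following (addresses are placeholders; see pmm-admin config --help in 1.0.7 for the exact usage):

pmm-admin config --server-address 1.2.3.4 --client-address <PUBLIC IP> --bind-address <PRIVATE IP>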

Thanks, great!