Metrics are not being populated to the PMM Server after successfully configuring the client with the server

After running the setup commands below, the pmm-agent starts successfully, but it does not populate any metrics data. I can see the node registered on the PMM Server, and it is visible in the Grafana dashboard, but no metrics data comes through.

Below is the error from the syslog:

INFO[2022-04-14T15:25:35.092-07:00] 2022-04-14T22:25:35.092Z	error	VictoriaMetrics/lib/promscrape/scrapework.go:354error when scraping "http://127.0.0.1:42000/metrics?collect%5B%5D=buddyinfo&collect%5B%5D=cpu&collect%5B%5D=diskstats&collect%5B%5D=filefd&collect%5B%5D=filesystem&collect%5B%5D=loadavg&collect%5B%5D=meminfo&collect%5B%5D=meminfo_numa&collect%5B%5D=netdev&collect%5B%5D=netstat&collect%5B%5D=processes&collect%5B%5D=standard.go&collect%5B%5D=standard.process&collect%5B%5D=stat&collect%5B%5D=textfile.hr&collect%5B%5D=time&collect%5B%5D=vmstat" from job "node_exporter_agent_id_215daabc-96e7-4bdd-aa12-26a826c92c9e_hr-1s" with labels {agent_id="/agent_id/215daabc-96e7-4bdd-aa12-26a826c92c9e",agent_type="node_exporter",instance="/agent_id/215daabc-96e7-4bdd-aa12-26a826c92c9e",job="node_exporter_agent_id_215daabc-96e7-4bdd-aa12-26a826c92c9e_hr-1s",machine_id="/machine_id/cd7ecc10985f4f65a08f0ba525114dc0",node_id="/node_id/f09e4234-5f04-40e7-9a8f-c082d3851d08",node_name="ht-prod-tcdb2.spacex.corp",node_type="generic"}: error when scraping "http://127.0.0.1:42000/metrics?collect%5B%5D=buddyinfo&collect%5B%5D=cpu&collect%5B%5D=diskstats&collect%5B%5D=filefd&collect%5B%5D=filesystem&collect%5B%5D=loadavg&collect%5B%5D=meminfo&collect%5B%5D=meminfo_numa&collect%5B%5D=netdev&collect%5B%5D=netstat&collect%5B%5D=processes&collect%5B%5D=standard.go&collect%5B%5D=standard.process&collect%5B%5D=stat&collect%5B%5D=textfile.hr&collect%5B%5D=time&collect%5B%5D=vmstat" with timeout 1s: timeout  agentID=/agent_id/e5db1093-68ec-4623-992f-67e9ccf4daed component=agent-process type=vm_agent

There is not much information there. Can you please show how you set up PMM, how you registered the agent with the server, and how you added the MySQL service to PMM?


sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
Install the PMM Client package: yum install -y pmm2-client
Register with the PMM Server: pmm-admin config --server-url='https://admin:********@ht1-perc-mntr01:443' --server-insecure-tls

I used the above commands to set up the PMM client. I have only registered the agent with the server; I have not yet added the MySQL service.
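
As a quick sanity check at this stage (a generic sketch, assuming the default systemd unit name pmm-agent that the pmm2-client package installs), you can confirm the agent service came up and watch its log for connection errors:

# Confirm the pmm-agent service is running
systemctl status pmm-agent
# Follow the agent's log to spot connection or scrape errors
journalctl -u pmm-agent -f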


pmm-admin status
pmm-admin list
ps -Af | grep pmm

Can you do some basic sysadmin checks to see whether things are even running?

[root@ht-prod-tcdb2 ~]# pmm-admin status
Agent ID: /agent_id/b01c0ffd-0cbb-46c9-9f03-d63d45ec2c78
Node ID : /node_id/f09e4234-5f04-40e7-9a8f-c082d3851d08

PMM Server:
	URL    : https://ht1-perc-mntr01:443/
	Version: 2.15.1

PMM Client:
	Connected        : true
	Time drift       : 102.469µs
	Latency          : 818.838µs
	pmm-admin version: 2.27.0
	pmm-agent version: 2.27.0
Agents:
	/agent_id/215daabc-96e7-4bdd-aa12-26a826c92c9e node_exporter Running
	/agent_id/e5db1093-68ec-4623-992f-67e9ccf4daed vmagent Running

[root@ht-prod-tcdb2 ~]# pmm-admin list
Service type        Service name        Address and port        Service ID

Agent type           Status           Metrics Mode        Agent ID                                              Service ID
pmm_agent            Connected                            /agent_id/b01c0ffd-0cbb-46c9-9f03-d63d45ec2c78         
node_exporter        Running          push                /agent_id/215daabc-96e7-4bdd-aa12-26a826c92c9e         
vmagent              Running          push                /agent_id/e5db1093-68ec-4623-992f-67e9ccf4daed

[root@ht-prod-tcdb2 ~]# ps -Af | grep pmm
root      2430  2058  0 11:11 pts/0    00:00:00 grep --color=auto pmm
root      3786     1  0 Apr14 ?        00:01:33 /usr/sbin/pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml
root      4273  3786 27 Apr14 ?        1-01:34:54 /usr/local/percona/pmm2/exporters/node_exporter --collector.bonding --collector.buddyinfo --collector.cpu --collector.diskstats --collector.entropy --collector.filefd --collector.filesystem --collector.hwmon --collector.loadavg --collector.meminfo --collector.meminfo_numa --collector.netdev --collector.netstat --collector.netstat.fields=^(.*_(InErrors|InErrs|InCsumErrors)|Tcp_(ActiveOpens|PassiveOpens|RetransSegs|CurrEstab|AttemptFails|OutSegs|InSegs|EstabResets|OutRsts|OutSegs)|Tcp_Rto(Algorithm|Min|Max)|Udp_(RcvbufErrors|SndbufErrors)|Udp(6?|Lite6?)_(InDatagrams|OutDatagrams|RcvbufErrors|SndbufErrors|NoPorts)|Icmp6?_(OutEchoReps|OutEchos|InEchos|InEchoReps|InAddrMaskReps|InAddrMasks|OutAddrMaskReps|OutAddrMasks|InTimestampReps|InTimestamps|OutTimestampReps|OutTimestamps|OutErrors|InDestUnreachs|OutDestUnreachs|InTimeExcds|InRedirects|OutRedirects|InMsgs|OutMsgs)|IcmpMsg_(InType3|OutType3)|Ip(6|Ext)_(InOctets|OutOctets)|Ip_Forwarding|TcpExt_(Listen.*|Syncookies.*|TCPTimeouts))$ --collector.processes --collector.standard.go --collector.standard.process --collector.stat --collector.textfile.directory.hr=/usr/local/percona/pmm2/collectors/textfile-collector/high-resolution --collector.textfile.directory.lr=/usr/local/percona/pmm2/collectors/textfile-collector/low-resolution --collector.textfile.directory.mr=/usr/local/percona/pmm2/collectors/textfile-collector/medium-resolution --collector.textfile.hr --collector.textfile.lr --collector.textfile.mr --collector.time --collector.uname --collector.vmstat --collector.vmstat.fields=^(pg(steal_(kswapd|direct)|refill|alloc)_(movable|normal|dma3?2?)|nr_(dirty.*|slab.*|vmscan.*|isolated.*|free.*|shmem.*|i?n?active.*|anon_transparent_.*|writeback.*|unstable|unevictable|mlock|mapped|bounce|page_table_pages|kernel_stack)|drop_slab|slabs_scanned|pgd?e?activate|pgpg(in|out)|pswp(in|out)|pgm?a?j?fault)$ --no-collector.arp --no-collector.bcache --no-collector.conntrack --no-collector.drbd --no-collector.edac --no-collector.infiniband --no-collector.interrupts --no-collector.ipvs --no-collector.ksmd --no-collector.logind --no-collector.mdadm --no-collector.mountstats --no-collector.netclass --no-collector.nfs --no-collector.nfsd --no-collector.ntp --no-collector.qdisc --no-collector.runit --no-collector.sockstat --no-collector.supervisord --no-collector.systemd --no-collector.tcpstat --no-collector.timex --no-collector.wifi --no-collector.xfs --no-collector.zfs --web.disable-exporter-metrics --web.listen-address=:42000
root      4345  3786  0 Apr14 ?        00:17:22 /usr/local/percona/pmm2/exporters/vmagent -envflag.enable=true -httpListenAddr=127.0.0.1:42001 -loggerLevel=INFO -promscrape.config=/tmp/vm_agent/agent_id/e5db1093-68ec-4623-992f-67e9ccf4daed/vmagentscrapecfg -remoteWrite.maxDiskUsagePerURL=1073741824 -remoteWrite.tlsInsecureSkipVerify=true -remoteWrite.tmpDataPath=/tmp/vmagent-temp-dir -remoteWrite.url=https://ht1-perc-mntr01:443/victoriametrics/api/v1/write
[root@ht-prod-tcdb2 ~]#

Everything looks correct and appears to be running. But your CPU graphs are empty?


Yes, no graphs are being populated, and I shared the error messages from /var/log/messages at the top.


Can you run ss -nltp | grep 42000 and see if PMM is listening on that port? If it is, you probably have a firewall that is blocking PMM from scraping its own data.

ss -nltp | grep 42000
LISTEN     0      128       [::]:42000                 [::]:*                   users:(("node_exporter",pid=4273,fd=3))

It's listening.


Does this return anything? (Note the single quotes around the URL, so the shell does not treat the & as a background operator.)

curl 'http://127.0.0.1:42000/metrics?collect%5B%5D=buddyinfo&collect%5B%5D=cpu'

If not, then you have firewall rules blocking localhost.


It says "Invalid username or password"…

curl http://127.0.0.1:42000/metrics?collect

Invalid username or password
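
(For context: PMM 2 protects its exporters with HTTP basic authentication, so this response is expected for an unauthenticated request rather than a sign of a firewall problem. A sketch of an authenticated scrape, assuming the documented pmm user with the exporter's agent ID as the password:

# Scrape node_exporter directly, authenticating as the "pmm" user;
# the password is the node_exporter agent ID shown by pmm-admin list
curl -s -u 'pmm:/agent_id/215daabc-96e7-4bdd-aa12-26a826c92c9e' 'http://127.0.0.1:42000/metrics?collect%5B%5D=cpu' | head

If the authenticated request also hangs until it times out, the problem is the exporter itself being slow, not the network path.)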

Hello, I identified the issue. There are too many virtual filesystems mounted on this host, and node_exporter times out while collecting filesystem metrics from them.

The command I ran to find this: lsof -p $(pidof node_exporter)

I can see all the virtual filesystems… and even a simple df -k takes more than 5s to respond.

Is there any way to adjust the timeout parameter for the exporter?
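
(One angle worth noting, as an assumption based on how PMM 2 drives these scrapes rather than something confirmed in this thread: the 1s timeout in the log comes from the high-resolution scrape interval, which is configured on the PMM Server settings page, not on the exporter. Lowering the metrics resolution should also relax the per-scrape timeout; a sketch using PMM 2's settings API, with placeholder credentials:

# Raise the high-resolution scrape interval from 1s to 5s on the PMM Server
curl -k -X POST 'https://admin:********@ht1-perc-mntr01:443/v1/Settings/Change' \
    --data '{"metrics_resolutions": {"hr": "5s"}}'

A more direct fix would be to reduce the number of virtual filesystem mounts the exporter has to walk, since that is the underlying cause of the slow scrape.)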
