PMM Server not receiving pod resource data from Kubernetes cluster


My team has been trying to implement PMM Server to monitor our Percona XtraDB Clusters through PMM Clients. Most of the data about the MySQL instances is being displayed, but not the CPU, RAM, disk, and network information on the main dashboard.

Meanwhile, on the dashboard template MySQL/MySQL Instance Summary, all of the graphs have data, except Node Summary.

PMM Server image: percona/pmm-server:2.31.0
PMM Clients image: percona/pmm-client:2.28.0
K8s: v1.22.14-gke.300

Is this an RBAC problem, or a YAML misconfiguration? We are using Helm to install the server, and the generated templates to install the clusters. We made a couple of changes to the templates: we set the version to 1.10.0 and added a containerSecurityContext for the pmm-client container. The latter was required for Kubernetes to recognize the user as non-root.
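For reference, the containerSecurityContext change looked roughly like this (a minimal sketch of the pmm-client section of the values; the exact key path and UID are assumptions, check your generated templates):

```yaml
pmm:
  enabled: true
  image: percona/pmm-client:2.28.0
  # Added so Kubernetes accepts the container under a runAsNonRoot policy
  containerSecurityContext:
    runAsNonRoot: true
    runAsUser: 1001              # non-root UID (assumed; use your image's UID)
    allowPrivilegeEscalation: false
```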

Here are the files used for both Server and Clusters:
pmm-values.yaml.pdf (63.0 KB)
pxc-db-chart.yaml.pdf (75.2 KB)


I’m not sure if it is related, but there is an error popping up from the pmm-client:

The serverUser and secrets.passwords were not altered.


I have to phone a friend here…K8s is my kryptonite!

Here’s what I do know:
In a typical configuration, all the data you’re missing comes from Node Exporter…so when I see that blank but database stats are flowing that means that the instance was added as a “remote instance” (where the PMM server is reaching out to the DB service port and scraping what data it can from port 3306 in mysql’s case) but since there’s not an external service port available for node stats, they’re just blank.

I just browsed through the release notes, and Helm support wasn’t officially added until 2.29.0, so you might first try upgrading your client image to at least 2.29.0. For maximum compatibility, though, I’d go right to 2.31.0 to match your PMM Server.
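If it helps, bumping the client image is just a values change (a sketch against the generated pxc-db templates; the key path is an assumption, verify against your chart):

```yaml
pmm:
  enabled: true
  # Match the PMM Server version for maximum compatibility
  image: percona/pmm-client:2.31.0
```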

I know that there were some changes made (or to be made) to the way PMM handles K8s monitoring. node_exporter’s original premise was monitoring the host that a DB is running on, but here you need more levels of data (the pod’s available resources and consumption, the node’s, and the cluster’s) that node_exporter wasn’t able to look at (and may still not be able to; not quite sure).

I’ll put this post in front of our K8s experts to see if they have ideas and/or suggestions on where to start looking.


Hi @SCHrodrigo ,

did you authorize that PMM Client as described in the PXC doc?

The Operator needs to know the PMM credentials. As long as the client is not connected, there are no metrics.
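A minimal sketch of what that wiring can look like, assuming the secret key names used by recent operator versions (pmmserver for password auth, pmmserverkey for API-key auth; verify against your operator version’s docs):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-secrets      # hypothetical; must match spec.secretsName
type: Opaque
stringData:
  pmmserver: <PMM admin password>    # password-based auth
  # pmmserverkey: <PMM API key>      # token-based auth on newer operators
```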

In PMM DBaaS, when you create a DB there, it configures everything for you.

If you add that K8s cluster to PMM DBaaS, PMM would install the VictoriaMetrics operator and start gathering cAdvisor metrics as well. From 2.32, kube-state-metrics too.



@Denys_Kondratenko and @Steve_Hoffman, thank you for answering!

We combined both pieces of advice: we set the versions to the latest and changed the authorization to be token-based. Since then, the registration error has stopped popping up, and the metrics started rolling in.

Thank you!


Nice! Thank you for the update!