PMM Agent Not Monitoring PVC Mounted at /data/db in Percona Operator for MongoDB

Description:

Hi team,

I’m using the Percona Operator for MongoDB and have successfully enabled PMM integration across my clusters. All MongoDB metrics are working well. However, I’ve noticed that the PMM agent does not monitor disk usage for the PVC mounted at /data/db, which is mongod’s main data directory. Instead, it monitors the node’s root disk and other mounted paths such as configuration volumes (e.g., SSL certificates and keys), which are not helpful for tracking actual database storage usage.

Here is the list of devices that PMM picked up:

"device": "/dev/nvme1n1p1",
"device": "/dev/nvme1n1p1",
"device": "/dev/nvme1n1p1",
"device": "/dev/root",
"device": "tmpfs",
"device": "tmpfs",
"device": "tmpfs",
"device": "/dev/nvme1n1p1",
"device": "/dev/nvme1n1p1",
"device": "/dev/nvme1n1p1",
"device": "/dev/root",
"device": "tmpfs",
"device": "tmpfs",
"device": "tmpfs",
"device": "/dev/nvme1n1p1",
"device": "/dev/nvme1n1p1",
"device": "/dev/nvme1n1p1",
"device": "/dev/root",
"device": "tmpfs",
"device": "tmpfs",
"device": "tmpfs",

What I need monitored is this volume:
/dev/nvme7n1 30G 13G 17G 43% /data/db

PMM config:

  pmm:
    enabled: true
    image: percona/pmm-client:2.43.1
    serverHost: pmm.domain.xyz
    mongodParams: --environment=sandbox --cluster=mycluster --replication-set=mycluster-rs0

Version:

PMM server and clients: 2.43.1

Percona Operator for MongoDB: 1.20.1

Percona Server for MongoDB: 7.0.18

Questions:

  1. Is there a supported way to configure PMM (via the Operator or otherwise) to monitor PVC disk usage?

  2. How can I monitor Kubernetes PVC usage in general? (See the rule sketch after this list.)

  3. Are there any best practices or recommended workarounds for this scenario?
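
On question 2: Kubernetes itself exposes PVC capacity and usage through the kubelet’s kubelet_volume_stats_* metrics (labelled by namespace and persistentvolumeclaim), independently of what the PMM agent can see from inside the pod. PMM does not scrape kubelets out of the box, so the following is only a sketch that assumes a separate Prometheus/VictoriaMetrics scrape job against the kubelets is in place; the PVC name pattern mongod-data-.* is an assumption, adjust it to your cluster:

# Sketch only: relies on kubelet_volume_stats_* metrics from a kubelet scrape job,
# which PMM does not provide by default. The PVC name regex is an assumption.
groups:
  - name: mongodb-pvc-usage
    rules:
      - alert: MongoDataPVCAlmostFull
        expr: |
          kubelet_volume_stats_used_bytes{persistentvolumeclaim=~"mongod-data-.*"}
            / kubelet_volume_stats_capacity_bytes{persistentvolumeclaim=~"mongod-data-.*"}
            > 0.80
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "PVC {{ $labels.persistentvolumeclaim }} is over 80% full"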

Any guidance or examples would be greatly appreciated!

After further debugging, I’ve confirmed that node_exporter can detect all devices and report general metrics like node_disk_info, but critical metrics such as node_filesystem_free_bytes and node_filesystem_used_bytes are missing for the PVC mounted at /data/db.

bash-5.1$ df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay  100G  9.2G   91G  10% /
tmpfs          tmpfs     64M     0   64M   0% /dev
/dev/root      erofs    308M  308M     0 100% /usr/local/sbin/modprobe
tmpfs          tmpfs     30G   12K   30G   1% /etc/mongodb-ssl
/dev/nvme0n1p1 xfs      100G  9.2G   91G  10% /etc/hosts
shm            tmpfs     64M     0   64M   0% /dev/shm
tmpfs          tmpfs     30G   12K   30G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs          tmpfs     30G  4.0K   30G   1% /run/secrets/eks.amazonaws.com/serviceaccount

It appears the root cause is that the PMM client sidecar container does not have the /data/db volume mounted in its own mount namespace, so its node_exporter cannot see that filesystem. I attempted to modify pmm.containerSecurityContext to elevate permissions, but that didn’t resolve the issue.

containerSecurityContext:
  privileged: true
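
Since the problem looks like volume visibility rather than permissions, one workaround I’m considering is an extra sidecar that mounts the existing data volume and runs its own node_exporter. This is only a sketch: it assumes the operator’s replsets[].sidecars field appends the container to the mongod pod as-is, that the data volume in the generated StatefulSet is named mongod-data (please verify against your StatefulSet), and that the exporter is then scraped separately, e.g. registered in PMM as an external exporter:

spec:
  replsets:
    - name: rs0
      sidecars:
        - name: data-disk-exporter
          # Hypothetical sidecar; the image tag and the volume name are assumptions.
          image: prom/node-exporter:v1.8.2
          args:
            - --collector.disable-defaults
            - --collector.filesystem
          ports:
            - name: metrics
              containerPort: 9100
          volumeMounts:
            - name: mongod-data      # existing PVC-backed data volume of the mongod container
              mountPath: /data/db
              readOnly: true

If that approach is viable, the node_filesystem_* metrics for /data/db would come from this exporter rather than from the pmm-client container itself.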

If anyone has encountered a similar issue or has ideas on how to grant PMM client access to PVC-mounted volumes for proper disk usage monitoring, I’d greatly appreciate any guidance or suggestions!