PMM with Mongo Operator reports wrong disk info

Hello there,

First of all thank you for building these amazing products, I’m sure you get it all the time but a bit of repetition doesn’t hurt!

I’m having a slight issue where the disk usage reported by PMM running alongside the Percona Operator for MongoDB isn’t accurate.

In my case, I’m running a simple non-sharded three-node cluster, each member with a 128GB persistent volume claim. Each member runs on its own dedicated Kubernetes node, which has a 20GB disk.

Two things:

  • The PMM dashboard doesn’t report those three volume claims at all, even though monitoring them is the most important thing for making sure the Mongo cluster doesn’t go down when they become saturated.
  • It shows the 20GB root volume multiple times, because it is mounted at several points inside the container, namely /etc/hostname, /etc/hosts and /etc/resolv.conf.

I looked for pmm-agent options that might fix this, but without success.

Unless I missed something, I was also wondering: why not make use of db.stats(), which seems to report an fsTotalSize close to the PVC size, to build that metric and possibly alert when usage reaches dangerous levels?
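For illustration, here is a minimal sketch of the kind of check I have in mind, using pymongo to read the fsUsedSize/fsTotalSize fields from the dbStats command (the same data db.stats() shows in the shell). The connection string and the 80% threshold are just placeholders for this example, not a proposal for how PMM should implement it.

```python
# Sketch: read filesystem usage as MongoDB itself sees it, via the dbStats command.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI

stats = client["admin"].command("dbStats")  # equivalent to db.stats() in mongosh
fs_used = stats["fsUsedSize"]    # bytes in use on the filesystem holding the dbPath
fs_total = stats["fsTotalSize"]  # total size of that filesystem (the PVC in my setup)

used_pct = 100 * fs_used / fs_total
print(f"dbPath filesystem: {fs_used / 2**30:.1f} GiB / {fs_total / 2**30:.1f} GiB "
      f"({used_pct:.1f}% used)")

if used_pct > 80:  # arbitrary example threshold
    print("WARNING: the PVC is getting close to saturation")
```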

It is worth noting that although pmm-server has the same duplicated Kubernetes mountpoints, it does show its own PVC volume mounted at /srv.

Thanks in advance for the help!

Thank you for raising this! We need to make sure that PMM is container/k8s aware and collects disk usage, CPU, and RAM from both the host and the container.
We are working on it. ETA is this quarter (Q1 2023).
I’m not sure whether we will use db.stats() or focus on the infrastructure side, because we need to solve the same problem for other operators as well.