PMM disc space usage

Just installed an instance of PMM as a trial, monitoring 3 Percona XtraDB Cluster nodes.

After just 48 hours of gathering statistics, the Prometheus data storage has reached 9.7 GiB. So, ~35 GiB of storage to monitor just 3 MySQL servers for a week?

Seems a little excessive to me…
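For reference, that weekly figure is a simple linear extrapolation from the first two days:

```shell
# 9.7 GiB gathered in 48 h, extrapolated to one week
awk 'BEGIN { printf "%.0f GiB/week\n", 9.7 / 2 * 7 }'   # prints "34 GiB/week"
```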

PMM uses 1s resolution by default and gathers the maximum set of statistics if all metrics are available.

There could also be other factors. For example, disk usage depends on the number of tables - it can be a lot, and disabling per-table stats (re-adding mysql:metrics with the --disable-tablestats flag) will decrease disk usage dramatically.
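As a sketch, re-adding the service without per-table stats would look something like this on each monitored node (exact flag placement can differ between pmm-admin versions, so check `pmm-admin add --help` first):

```shell
# Drop and re-add the MySQL metrics service without per-table statistics
pmm-admin remove mysql:metrics
pmm-admin add mysql:metrics --disable-tablestats
```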

If you don’t need 1s resolution, you can set it to 5s: https://www.percona.com/doc/percona-…ed-for-metrics

If 30 days of retention is too much for your disk space, you can lower it: https://www.percona.com/doc/percona-…for-prometheus
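Both settings are passed as environment variables when starting the pmm-server container. A sketch of the relevant docker run options (the 5s resolution and 168h = 7 days retention values here are illustrative):

```shell
# Start pmm-server with lower metrics resolution and retention
docker run -d -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  -e METRICS_RESOLUTION=5s \
  -e METRICS_RETENTION=168h \
  --restart always \
  percona/pmm-server:latest
```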

To confirm it is the metrics data taking up the space, I would ask you to measure it by running:
docker exec -ti pmm-server du -sh /opt/prometheus/data

Thank you for your quick response.

As for number of tables, each of the cluster nodes has 620 tables, so a reasonable number I guess…


$ docker exec -ti pmm-server du -sh /opt/prometheus/data
11G /opt/prometheus/data

I’m a bit of a Docker novice, do I need to create a new pmm-server container to set the environment options?

Yes, you need to re-create the pmm-server container to pass different environment variables; you can preserve the data container…

There is also an option to change the chunk encoding for metric storage, which claims a 50% space saving at the cost of roughly 20% CPU overhead. However, we have not run any tests on it yet, but will do so soon.
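For context, this refers to the Prometheus 1.x storage flag below (encoding version 2 is the denser "varbit" encoding); how to pass it through to the Prometheus instance inside pmm-server may vary by PMM version, so treat this as an assumption to verify against the docs:

```shell
# Prometheus 1.x flag selecting the varbit chunk encoding
-storage.local.chunk-encoding-version=2
```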