How to clear data in PMM

I have a separate AWS instance for PMM, running CentOS 7 with BTRFS.

[root@mysql-aws-pmm ~]# docker version
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 17:05:44 2017
OS/Arch: linux/amd64

Server:
Version: 17.03.1-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 17:05:44 2017
OS/Arch: linux/amd64
Experimental: false

/var/lib/docker is a symlink to /data/docker:

[root@mysql-aws-pmm ~]# ls -l /var/lib | grep docker
lrwxrwxrwx. 1 root root 12 Apr 25 10:05 docker -> /data/docker

The /data mount point is 50 GB in size:

[root@mysql-aws-pmm ~]# df -hl | grep data | grep dev
/dev/xvdb1 50G 3.4G 47G 7% /data
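Free space on this mount can be watched with a small cron check; a minimal sketch (the 80% threshold and the mount point are illustrative choices, not from the PMM docs):

```shell
#!/bin/sh
# Cron-friendly check: warn when the /data mount crosses a usage threshold.
# The 80% threshold is an arbitrary illustrative value.
MOUNT=/data
THRESHOLD=80

# df -P gives POSIX single-line output; field 5 is "Use%".
used=$(df -P "$MOUNT" | awk 'NR==2 { gsub("%", ""); print $5 }')
if [ "$used" -ge "$THRESHOLD" ]; then
    echo "WARNING: $MOUNT is ${used}% full"
fi
```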

In the official documentation about space requirements for PMM we can see:

3.3.2 What are the minimum system requirements for PMM?
• PMM Server
Any system which can run Docker version 1.12.6 or later.
It needs roughly 1 GB of storage for each monitored database node with data retention set to one week.
Minimum memory is 2 GB for one monitored database node, but it is not linear when you increase more nodes.
For example, data from 20 nodes should be easily handled with 16 GB.

For now I have 5 MySQL instances in PMM, and yesterday all the space was eaten up by the PMM Docker container.
Because I was sure that 50 GB would be enough for me, I did not monitor free space, but I do now.
I reinstalled everything from scratch yesterday; here are the current statistics:

docker exec -it pmm-server bash

[root@69a113c27b55 opt]# date
Wed May 3 11:05:21 EEST 2017
[root@69a113c27b55 opt]# du -hs /var/lib/mysql
151M /var/lib/mysql
[root@69a113c27b55 opt]# du -hs /opt/prometheus/
1.6G /opt/prometheus/

[root@69a113c27b55 opt]# date
Wed May 3 11:46:33 EEST 2017
[root@69a113c27b55 opt]# du -hs /var/lib/mysql
156M /var/lib/mysql
[root@69a113c27b55 opt]# du -hs /opt/prometheus/
1.6G /opt/prometheus/

[root@69a113c27b55 opt]# date
Wed May 3 15:37:06 EEST 2017
[root@69a113c27b55 opt]# du -hs /var/lib/mysql
176M /var/lib/mysql
[root@69a113c27b55 opt]# du -hs /opt/prometheus/
2.0G /opt/prometheus/
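The manual `date`/`du` checks above can be automated; a minimal sketch, assuming the same in-container paths, that could be run hourly from cron to build a growth log:

```shell
#!/bin/sh
# One-shot sampler for the directories measured by hand above.
# Prints "<timestamp> <size> MB <path>" per directory; append the
# output to a log file from cron to track growth over time.
sample() {
    size=$(du -sm "$1" 2>/dev/null | awk '{ print $1 }')
    printf '%s %s MB %s\n' "$(date '+%F %T')" "${size:-?}" "$1"
}
sample /var/lib/mysql
sample /opt/prometheus
```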

We have the following situation:
The DB size increased by 25 MB in 4 h 30 m.
The Prometheus size increased by 400 MB in 4 h 30 m.

In 24 hours that amounts to about 2.5 GB, so 50 GB will run out in roughly 20 days.
According to the official documentation this must not happen, because data rotates on a 7-day cycle.
But in my situation something went wrong.
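The estimate can be reproduced with shell arithmetic (figures taken from the `du` and `df` output above):

```shell
#!/bin/sh
# Growth observed over the 4.5-hour measurement window above.
mysql_mb=25        # /var/lib/mysql: 151M -> 176M
prom_mb=400        # /opt/prometheus: 1.6G -> 2.0G
window_min=270     # 4 h 30 m

total_mb=$(( mysql_mb + prom_mb ))
per_day_mb=$(( total_mb * 24 * 60 / window_min ))   # ~2.2 GB/day
free_mb=$(( 47 * 1024 ))                            # 47G free per df
days_left=$(( free_mb / per_day_mb ))

echo "growth: ${per_day_mb} MB/day, disk full in ~${days_left} days"
# → growth: 2266 MB/day, disk full in ~21 days
```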

I need to understand how to fix this problem.

Sorry for the inconvenience.
I created a ticket for improving the documentation: [url]Log in - Percona JIRA

We have two types of databases inside a PMM container: Prometheus and MySQL (QAN).
Usually we see fast growth of the QAN (MySQL) database, so it has a very short retention period: 8 days.
Usually the Prometheus database is small and doesn't take up a lot of free space (especially with the --disable-tablestats flag),
but sometimes it is hard to predict the mysqld_exporter response size, because it depends on the load type.
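For reference, the flag mentioned above is passed when the MySQL instance is added on the client side; a sketch assuming PMM 1.x `pmm-admin` syntax (the credentials are placeholders — check `pmm-admin add mysql --help` on your version):

```shell
# Re-add MySQL metrics without per-table statistics, which are usually
# the largest contributor to Prometheus data volume.
pmm-admin rm mysql:metrics
pmm-admin add mysql:metrics --user pmm --password <password> --disable-tablestats
```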

I recommend decreasing the METRICS_RETENTION option value,
see [url]Percona Monitoring and Management
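Assuming the standard Docker deployment with a separate `pmm-data` container, retention is set when the `pmm-server` container is created, so the container has to be recreated to change it; a sketch (the 192h value is illustrative, not a specific recommendation):

```shell
# Recreate pmm-server with a shorter Prometheus retention window.
# METRICS_RETENTION is a pmm-server environment variable (default 720h);
# the data survives because it lives in the pmm-data container.
docker stop pmm-server && docker rm pmm-server
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  -e METRICS_RETENTION=192h \
  --restart always \
  percona/pmm-server:1.1.3
```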

Thanks!
Just for fun: is there anyone who has as many issues with PMM as we do?

No, no! You are lucky :wink:
I know about larger and more loaded installations, and they don't require any hacking/researching from the PMM developers' side :slight_smile:

Well, I hope our experience will be useful for PMM's evolution.

We really appreciate your feedback!

By the way, Michael created a blog post regarding disk space usage: [url]How much disk space should I allocate for Percona Monitoring and Management? - Percona Database Performance Blog

It opens a login page, and my forum credentials don't work there.

Oh, you are right; the blog post is still in proof-reading. It will be available soon.

Hi aleksey.filippov, in case you didn't see it, the blog post is live: [url]https://www.percona.com/blog/2017/05/04/how-much-disk-space-should-i-allocate-for-percona-monitoring-and-management/[/url]

Thanks, Michael!