Metrics loss on pmm-server

We are currently using PMM to experiment with monitoring of our MySQL instances. We are running v1.0.5 for both client and server.
Everything ran fine for quite a while, until we activated more clients (currently 13). Since then we have sporadically been missing all metrics from all clients for variable amounts of time.
Trying to dig into the issue, we excluded any network involvement, but looking at the Prometheus log file inside the container we noticed that every time we start missing data the following events are reported:

As you can see from the attached Grafana screenshots, which show numerous gaps, the events seem correlated.
It is not limited to Linux metrics; MySQL metrics are missing as well for the same time intervals.
The VM running the Docker container has never been under high CPU or memory pressure.

Could you please help us to investigate the problem?

Thanks a lot in advance
AC

[Attachments: grafana_20161109.png, grafana_20161109_2.png]

You need to increase the memory limit for Prometheus (metrics); see the Percona Monitoring and Management documentation.

For 13 clients, I would recommend at least 2G-4G, assuming the underlying host has 8G-16G.

This can be done by recreating the pmm-server container with the -e METRICS_MEMORY=XXX option, as sketched below.
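For illustration, here is a minimal sketch of how the container could be recreated with a larger Prometheus memory limit. The port mapping, data-container name, image tag, and memory value are assumptions based on a typical PMM 1.x setup, not values taken from this thread; in PMM 1.x, METRICS_MEMORY is specified in kilobytes (4194304 KB = 4 GB in this example), so adjust it for your host.

[code]
# Stop and remove the existing pmm-server container
# (collected metrics live in the pmm-data container, so they are preserved)
docker stop pmm-server
docker rm pmm-server

# Recreate pmm-server with a larger Prometheus memory limit
# (example value: 4194304 KB = 4 GB; adjust for your host)
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  -e METRICS_MEMORY=4194304 \
  --restart always \
  percona/pmm-server:1.0.5
[/code]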

Thanks a lot. I recreated the container passing the memory variable and we don't lose metric points anymore. Sorry to have missed this point in the FAQ.

Cool, thanks for the feedback.