PMM eats all memory

Hello,

I have a problem with memory on PMM monitoring. As the attached picture shows, monitoring consumes more and more memory. After restarting monitoring, the memory level falls and the story begins again. Is this normal behavior for PMM? PMM currently has 6 GB of memory.

[attached screenshot of memory usage]

How many clients do you have, and how many time series are shown on the Prometheus dashboard in Grafana?

The time series count is 249560, and I have 7 clients.

Oh, that is quite a huge number of time series. You probably have table stats enabled for all the clients?

[url]https://pmmdemo.percona.com/graph/dashboard/db/prometheus[/url]
We have about 20 clients, and that is only 50K time series.
Out of 32 GB of RAM, only half is in use: https://pmmdemo.percona.com/graph/dashboard/db/system-overview?var-interval=$__auto_interval&var-host=pmm-server

For your number of time series, I think 6 GB may not be enough. You would need much more.
It would still be interesting to see what the bottom tables show on your PMM server's version of this dashboard: [url]https://pmmdemo.percona.com/graph/dashboard/db/prometheus[/url]
Why are there so many time series?
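If table stats turn out to be enabled everywhere, one way to cut the series count is to re-add mysql:metrics with table stats disabled on the busiest clients. A rough sketch for a PMM 1.x client; please double-check the flag names against pmm-admin add mysql:metrics --help on your version:

[code]
# on each client where per-table metrics are not needed
pmm-admin rm mysql:metrics
# add --user/--password if your setup requires MySQL credentials here
pmm-admin add mysql:metrics --disable-tablestats --disable-userstats
pmm-admin list    # confirm the exporter is running again
[/code]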

weber, big thanks for your help.
I disabled stats for most servers and increased RAM to 16 GB. Time series went down to 190000, memory chunks to 521440, HTTP request duration to 185.17 ms, and HTTP response size to 2.17 KiB, and it worked fine for more than one week. The time series count is probably so big because of the heavy load on the databases. But I have now restarted again, since it had stabilized at 90% memory, and the time series count is now 80260.

I'll probably add some more RAM.

I have the same problem: the used memory keeps growing until the server crashes, so I reboot the machine to release resources and restart pmm-server.

The process resident memory grows from 5 GB to 32 GB in one day. I'm worried that the server will crash again.

Our servers run in virtual machines; the specifications are as below:
CPU: Xeon E5-2670 2.60 GHz, 8 cores
Memory: 90 GB
Disk: SAS

This one has 84 clients.
Version: 1.1.1, with options METRICS_MEMORY=42949672960, METRICS_RESOLUTION=5s

The other one:
CPU: Xeon E5-2670 2.60 GHz, 8 cores
Memory: 62 GB
Disk: SAS

This one has 48 clients.
Version: 1.1.1, with options METRICS_MEMORY=32212254720, METRICS_RESOLUTION=5s

Most of the clients run Percona 5.6.30.

To reduce the time series count, I set tablestats=OFF and userstats=OFF on half of the clients and removed the mysql:queries monitor.

Do we need more resources? For example, increasing CPU cores from 8 to 16, or upgrading memory to 128 GB?

In addition, do you have any suggestions for pmm-server's parameter configuration?

Please read [url]Storage | Prometheus[/url] for more details.
Can you try to recreate the container with METRICS_MEMORY=26214400?
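For reference, recreating the container with that value would look roughly like this, assuming the usual Docker setup with a pmm-data volume container (adjust the port mapping and image tag to your deployment):

[code]
docker stop pmm-server && docker rm pmm-server
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server \
  -e METRICS_MEMORY=26214400 -e METRICS_RESOLUTION=5s \
  --restart always percona/pmm-server:1.1.1
[/code]

The collected metrics live in the pmm-data container, so they survive recreating pmm-server itself.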

OK, I have recreated the container. Let me observe for a few days.
I’ll reply later. Thanks!

Since recreating the container and changing METRICS_MEMORY to 26214400, one of the servers' memory usage has still grown to 86% of total memory, and the other is slowly rising to about 60%.
Should we add more physical resources, like CPU or memory, to the servers?
Could we set the storage.local.max-chunks-to-persist value to limit memory usage?
I'm confused about why the memory usage won't stop growing…

METRICS_MEMORY 26214400 * 3 / 1024 / 1024 = 75 GB.
According to the graphs, the first server stopped growing at 75 GB, which is expected.

You can tune the METRICS_MEMORY value for the second server according to the formula:
X GB of RAM * 1024 * 1024 / 3 = METRICS_MEMORY
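A quick worked example, in case it helps: the 26214400 → 75 GB calculation above suggests METRICS_MEMORY is taken in kilobytes, so the original values (42949672960 and 32212254720) look like byte counts, which would put the effective cap far above physical RAM. For the 62 GB server, if you want Prometheus to level off at, say, roughly 48 GB (an illustrative target that leaves headroom for the OS and the other PMM components), the formula gives:

[code]
# 48 GB target * 1024 * 1024 (KB per GB) / 3 = METRICS_MEMORY value in KB
echo $(( 48 * 1024 * 1024 / 3 ))    # 16777216
[/code]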

If tablestats=OFF, I think it is not needed.

I got it! No wonder memory usage is approaching 75 GB.
I'll adjust this value for the second server.

Thank you very much!