PMM client is consuming memory

Hi,

The PMM client is consuming too much memory. We have 9 GB of RAM on the MySQL server.

I have already disabled tablestats and userstats, but the client is still using too much memory.

Please suggest what I can optimize to reduce memory consumption, or let me know what other details you require to diagnose it.

Hi there, thanks for your question about PMM. There are a few pointers in this sticky post that might help you identify the issue, https://www.percona.com/forums/questions-discussions/percona-monitoring-and-management/50690-pmm-troubleshooting-and-how-to-report-a-bug but meanwhile:
[LIST]
[*]What version of PMM are you on? If it's not the latest version, could you upgrade?
[*]What environment are you operating in?
[*]What are you monitoring? Are you monitoring several instances or just one? MySQL or MongoDB? Are there any errors being logged alongside the memory issues?
[*]Is this a new install, or did everything work well before and has it now started to go wrong?
[/LIST] Send in some of that information and I will see if I can get you more information.

Hi,

Find below details.
[LIST]
[*]What version of PMM are you on? If it's not the latest version, could you upgrade?
[/LIST]

We installed PMM using Docker.

PMM Server version: 1.10.0
PMM client version: 1.12.0
[LIST]
[*]What environment are you operating in?
[/LIST]

Linux: CentOS Linux release 7.3.1611
[LIST]
[*]What are you monitoring? Are you monitoring several instances or just one? MySQL or MongoDB? Are there any errors being logged alongside the memory issues?
[/LIST]

We are monitoring everything except userstats and tablestats.

We are monitoring 16 instances in total:
MySQL: 6
MongoDB: 10

No other errors were found.
[LIST]
[*]Is this a new install, or did everything work well before and has it now started to go wrong?
[/LIST]

We started monitoring these servers (MySQL + MongoDB) a month ago, but every week our database server crashes due to a memory issue, and we believe PMM is the cause.

docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest

We are having memory issues on the MySQL server.

Hi Gajendra, can you check which exact process is eating memory, using https://www.percona.com/blog/2018/05/21/capturing-per-process-metrics-with-percona-monitoring-and-management-pmm/ ? The PMM client consists of several processes, so it would be better to know which exact process is causing the problem.

You can also use this command: "ps aux | grep percona"
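A complementary check (standard procps options, nothing PMM-specific) is to sort all processes by resident memory so the heaviest consumer, PMM or otherwise, shows up first:

```shell
# Sort every process by resident memory (RSS), largest first, and
# show the five biggest; on the affected host, the heaviest PMM
# exporter or mysqld process will appear near the top of this list.
ps aux --sort=-rss | head -n 6
```

If a PMM process sits near the top by RSS, that narrows down which one to investigate; if mysqld itself is at the top, the memory pressure may not be the client's fault at all.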

root@rocky:~# ps aux | grep percona
root 1709 0.0 0.0 4504 708 ? Ss Aug13 0:00 /bin/sh -c /usr/local/percona/pmm-client/node_exporter -collectors.enabled=diskstats,filefd,filesystem,loadavg,meminfo,netdev,netstat,stat,time,uname,vmstat -web.listen-address=10.11.13.141:42000 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -web.ssl-key-file=/usr/local/percona/pmm-client/server.key >> /var/log/pmm-linux-metrics-42000.log 2>&1
root 1711 1.0 0.0 2459912 19312 ? Sl Aug13 7:11 /usr/local/percona/pmm-client/node_exporter -collectors.enabled=diskstats,filefd,filesystem,loadavg,meminfo,netdev,netstat,stat,time,uname,vmstat -web.listen-address=10.11.13.141:42000 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -web.ssl-key-file=/usr/local/percona/pmm-client/server.key
root 1712 0.0 0.0 4504 788 ? Ss Aug13 0:00 /bin/sh -c /usr/local/percona/qan-agent/bin/percona-qan-agent >> /var/log/pmm-mysql-queries-0.log 2>&1
root 1715 0.0 0.0 147476 12332 ? Sl Aug13 0:25 /usr/local/percona/qan-agent/bin/percona-qan-agent
root 1734 0.0 0.0 4504 848 ? Ss Aug13 0:00 /bin/sh -c /usr/local/percona/pmm-client/mysqld_exporter -collect.auto_increment.columns=true -collect.binlog_size=true -collect.global_status=true -collect.global_variables=true -collect.info_schema.innodb_metrics=true -collect.info_schema.processlist=true -collect.info_schema.query_response_time=true -collect.info_schema.tables=true -collect.info_schema.tablestats=true -collect.info_schema.userstats=true -collect.perf_schema.eventswaits=true -collect.perf_schema.file_events=true -collect.perf_schema.indexiowaits=true -collect.perf_schema.tableiowaits=true -collect.perf_schema.tablelocks=true -collect.slave_status=true -web.listen-address=10.11.13.141:42002 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -web.ssl-key-file=/usr/local/percona/pmm-client/server.key >> /var/log/pmm-mysql-metrics-42002.log 2>&1
root 1739 2.6 0.0 2459624 21752 ? Sl Aug13 18:30 /usr/local/percona/pmm-client/mysqld_exporter -collect.auto_increment.columns=true -collect.binlog_size=true -collect.global_status=true -collect.global_variables=true -collect.info_schema.innodb_metrics=true -collect.info_schema.processlist=true -collect.info_schema.query_response_time=true -collect.info_schema.tables=true -collect.info_schema.tablestats=true -collect.info_schema.userstats=true -collect.perf_schema.eventswaits=true -collect.perf_schema.file_events=true -collect.perf_schema.indexiowaits=true -collect.perf_schema.tableiowaits=true -collect.perf_schema.tablelocks=true -collect.slave_status=true -web.listen-address=10.11.13.141:42002 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -web.ssl-key-file=/usr/local/percona/pmm-client/server.key
root 11459 0.0 0.0 14224 984 pts/0 S+ 08:05 0:00 grep --color=auto percona

You can see here how much memory the different processes consume. Note that while a large amount of virtual memory is consumed, for example by node_exporter, that should not be a problem, as the resident amount of memory is very small.
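To illustrate that virtual-versus-resident distinction, ps can print both columns directly. This sketch uses the current shell's PID as a stand-in; on the affected host you would substitute the PID of node_exporter or mysqld_exporter from the listing above:

```shell
# VSZ is the virtual address space (often huge for Go programs such
# as node_exporter), while RSS is the memory actually resident in
# RAM -- the number that matters for memory pressure. Both are in KiB.
ps -o pid,vsz,rss,comm -p $$
```

A large VSZ with a small RSS, as in the node_exporter lines above, means the process has reserved a lot of address space but is holding very little real memory.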