I have an instance of pmm-client running on a higher-end machine with a full traffic load (a good target for analysis). On this instance the mysqld_exporter process is holding onto 28GB of RAM! I don’t know if this was true initially, as I hadn’t checked after install; it has been running for about a month. Even after restarting the PMM processes it climbs back to this level. Is this a bug, or something I can configure? At this usage level the tool is fairly infeasible to roll out due to its resource requirements. I didn’t expect the monitoring to use more RAM than the mysqld process itself.
Which version of the PMM client are you running? You can check with:
root@blinky:/var/lib/mysql# pmm-admin -v
28GB consumed by the exporter is certainly not expected. Can you show the output of ps aux | grep mysqld_exporter so we can see more details?
The version is 1.0.4 for both the client and the server. I’ll get you a more detailed view after it has been running for a while; right after launch the memory consumption looks normal for a few minutes.
root 21601 0.0 0.0 15340 5992 ? Ssl 14:26 0:00 /usr/local/percona/pmm-client/mysqld_exporter -collect.auto_increment.columns=true -collect.binlog_size=true -collect.global_status=true -collect.global_variables=true -collect.info_schema.innodb_metrics=true -collect.info_schema.processlist=true -collect.info_schema.query_response_time=true -collect.info_schema.tables=true -collect.info_schema.tablestats=true -collect.info_schema.userstats=true -collect.perf_schema.eventswaits=true -collect.perf_schema.file_events=true -collect.perf_schema.indexiowaits=true -collect.perf_schema.tableiowaits=true -collect.perf_schema.tablelocks=true -collect.slave_status=true -web.listen-address=x.x.x.x:42002
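Since the growth only shows up after some uptime, it may help to sample the exporter's resident memory periodically and keep a log of the trend. This is just a sketch, assuming a procps-style ps; the log path and the kB-to-MB conversion via awk are my choices, adjust as needed:

```shell
# Sample mysqld_exporter resident memory (RSS) and print it in MB.
# `ps -o rss= -C <name>` prints RSS in kilobytes with no header;
# it prints nothing if the process is not running.
ps -o rss= -C mysqld_exporter \
    | awk -v ts="$(date '+%F %T')" '{ printf "%s rss_mb=%.1f\n", ts, $1/1024 }'
```

Run it from cron (or in a `while true; do ...; sleep 60; done` loop) and append to a file such as /tmp/mysqld_exporter_rss.log; that way you can see whether the usage grows steadily or jumps at a particular time.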
Can you update pmm-client and pmm-server to the latest version, 1.1.1, and check how it behaves after the upgrade?
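For reference, the upgrade path I'd expect on a typical PMM 1.x setup looks roughly like the following. This is a hedged sketch, not a definitive procedure: the package manager commands assume the Percona repository is configured, and the server commands assume the common Docker deployment with a separate pmm-data container; check the Percona upgrade documentation for your environment before running anything.

```shell
# Client upgrade (pick the line matching your distro's package manager):
yum update pmm-client                       # RHEL/CentOS
apt-get install --only-upgrade pmm-client   # Debian/Ubuntu

# Server upgrade, assuming the Docker deployment with a pmm-data container:
docker pull percona/pmm-server:1.1.1
docker stop pmm-server && docker rm pmm-server
docker run -d -p 80:80 --volumes-from pmm-data \
    --name pmm-server --restart always percona/pmm-server:1.1.1
```

After the upgrade, pmm-admin -v should report the new client version, and the server version is shown in the PMM web interface.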
Sure. I have upgraded and am testing in an environment less likely to cause headaches if issues arise. Unfortunately, none of these environments has reproduced the same issue so far.