How to modify pmm2 client configuration and disable some collected stats

It appears that the pmm2 client is spamming /var/log/messages with errors. Is there any way to control which stats are collected by pmm-agent for MySQL?

/var/log/messages:
Apr 28 07:39:33 dev-evvio-db01 pmm-agent: INFO[2020-04-28T11:39:33.764+00:00] time="2020-04-28T11:39:33Z" level=error msg="Error scraping for collect.engine_tokudb_status: Error 1286: Unknown storage engine 'TOKUDB'" source="exporter.go:116" agentID=/agent_id/e7621b6f-5138-4612-ad2c-a975d3bc0af8 component=agent-process type=mysqld_exporter
Apr 28 07:39:33 dev-evvio-db01 pmm-agent: INFO[2020-04-28T11:39:33.781+00:00] time="2020-04-28T11:39:33Z" level=error msg="Error scraping for collect.heartbeat: Error 1146: Table 'heartbeat.heartbeat' doesn't exist" source="exporter.go:116" agentID=/agent_id/e7621b6f-5138-4612-ad2c-a975d3bc0af8 component=agent-process type=mysqld_exporter
Apr 28 07:40:00 dev-evvio-db01 pmm-agent: INFO[2020-04-28T11:40:00.071+00:00] Sending 32 buckets. agentID=/agent_id/f10f8686-3b62-48f4-926d-798c9aef58a8 component=agent-builtin type=qan_mysql_perfschema_agent

The mysqld_exporter process is clearly attempting to collect the following stats, including the ones that are failing:

--collect.heartbeat
--collect.engine_tokudb_status

# ps aux | grep mysqld_exporter
root      1546  0.8  0.2 114640 21396 ?        Sl   10:31   0:39 /usr/local/percona/pmm2/exporters/mysqld_exporter --collect.auto_increment.columns --collect.binlog_size --collect.custom_query.hr --collect.custom_query.hr.directory=/usr/local/percona/pmm2/collectors/custom-queries/mysql/high-resolution --collect.custom_query.lr --collect.custom_query.lr.directory=/usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution --collect.custom_query.mr --collect.custom_query.mr.directory=/usr/local/percona/pmm2/collectors/custom-queries/mysql/medium-resolution --collect.engine_innodb_status --collect.engine_tokudb_status --collect.global_status --collect.global_variables --collect.heartbeat --collect.info_schema.clientstats --collect.info_schema.innodb_cmp --collect.info_schema.innodb_cmpmem --collect.info_schema.innodb_metrics --collect.info_schema.innodb_tablespaces --collect.info_schema.processlist --collect.info_schema.query_response_time --collect.info_schema.tables --collect.info_schema.tablestats --collect.info_schema.userstats --collect.perf_schema.eventsstatements --collect.perf_schema.eventswaits --collect.perf_schema.file_events --collect.perf_schema.file_instances --collect.perf_schema.indexiowaits --collect.perf_schema.tableiowaits --collect.perf_schema.tablelocks --collect.slave_status --collect.standard.go --collect.standard.process --exporter.conn-max-lifetime=55s --exporter.global-conn-pool --exporter.max-idle-conns=3 --exporter.max-open-conns=3 --web.listen-address=:42001
root      6111  0.0  0.0 112712   988 pts/0    S+   11:45   0:00 grep --color=auto mysqld_exporter
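For what it's worth, the two failing collectors fail simply because this server doesn't have what they query: there is no TokuDB storage engine and no heartbeat.heartbeat table. If you want to confirm that yourself, a minimal check could look like the sketch below (the DSN and credentials are placeholders, it assumes the go-sql-driver/mysql package, and the two queries only mirror what the error messages already report):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL driver
)

func main() {
	// Placeholder DSN: point it at the instance pmm-agent is monitoring.
	db, err := sql.Open("mysql", "pmm:pmm-password@tcp(127.0.0.1:3306)/")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// collect.engine_tokudb_status needs the TokuDB engine to be installed.
	var n int
	if err := db.QueryRow(
		"SELECT COUNT(*) FROM information_schema.ENGINES WHERE ENGINE = 'TOKUDB'",
	).Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println("TokuDB engine available:", n > 0)

	// collect.heartbeat expects a heartbeat.heartbeat table (usually maintained by pt-heartbeat).
	if err := db.QueryRow(
		"SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'heartbeat' AND TABLE_NAME = 'heartbeat'",
	).Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println("heartbeat.heartbeat table present:", n > 0)
}

Both checks come back false here, which is exactly why those two scrapes log errors on every cycle.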

@lvit01 this is actually a known issue: https://jira.percona.com/browse/PMM-4665 but it hasn't been picked up for development yet. I checked my instance and I too am seeing the errors on 2.4 (client and server). If you don't mind registering for Jira and adding a comment, it will help it bubble up in the priority list.

Thanks Steve. Yes, I will add a comment in Jira as well.

I had a similar need recently: I wanted to remove the -collect.stats_memory_metrics parameter from proxysql_exporter. I eventually found that the pmm-agent on the monitored server actually goes back to pmm-server to fetch the exporter parameters and then starts the exporter:

/usr/sbin/pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml --debug

DEBU[2020-06-15T14:49:22.992+08:00] Logs redactor disabled in debug mode.         agentID=/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5 component=agent-process type=node_exporter
DEBU[2020-06-15T14:49:22.992+08:00] Starting: /usr/local/percona/pmm2/exporters/proxysql_exporter -collect.mysql_connection_list -collect.mysql_connection_pool -collect.mysql_status -collect.stats_memory_metrics -web.listen-address=:42001 (environment: DATA_SOURCE_NAME=stats:secret@tcp(127.0.0.1:6032)/?timeout=1s, HTTP_AUTH=pmm:/agent_id/c118aa18-3201-41e7-80b8-54c988eea699).  agentID=/agent_id/c118aa18-3201-41e7-80b8-54c988eea699 component=agent-process type=proxysql_exporter
DEBU[2020-06-15T14:49:22.992+08:00] Logs redactor disabled in debug mode.         agentID=/agent_id/c118aa18-3201-41e7-80b8-54c988eea699 component=agent-process type=proxysql_exporter
INFO[2020-06-15T14:49:22.992+08:00] Sending status: STARTING (port 42000).        agentID=/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5 component=agent-process type=node_exporter

DEBU[2020-06-15T14:49:22.991+08:00] Received message (3003 bytes):
id: 1
set_state: <
  agent_processes: <
    key: "/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5"
    value: <
      type: NODE_EXPORTER
      template_left_delim: "{{"
      template_right_delim: "}}"
      args: "--collector.bonding"
      args: "--collector.buddyinfo"
      args: "--collector.cpu"
      args: "--collector.diskstats"
      args: "--collector.entropy"
      args: "--collector.filefd"
      args: "--collector.filesystem"
      args: "--collector.hwmon"
      args: "--collector.loadavg"
      args: "--collector.meminfo"
      args: "--collector.meminfo_numa"
      args: "--collector.netdev"
      args: "--collector.netstat"
      args: "--collector.netstat.fields=^(.*_(InErrors|InErrs|InCsumErrors)|Tcp_(ActiveOpens|PassiveOpens|RetransSegs|CurrEstab|AttemptFails|OutSegs|InSegs|EstabResets|OutRsts|OutSegs)|Tcp_Rto(Algorithm|Min|Max)|Udp_(RcvbufErrors|SndbufErrors)|Udp(6?|Lite6?)_(InDatagrams|OutDatagrams|RcvbufErrors|SndbufErrors|NoPorts)|Icmp6?_(OutEchoReps|OutEchos|InEchos|InEchoReps|InAddrMaskReps|InAddrMasks|OutAddrMaskReps|OutAddrMasks|InTimestampReps|InTimestamps|OutTimestampReps|OutTimestamps|OutErrors|InDestUnreachs|OutDestUnreachs|InTimeExcds|InRedirects|OutRedirects|InMsgs|OutMsgs)|IcmpMsg_(InType3|OutType3)|Ip(6|Ext)_(InOctets|OutOctets)|Ip_Forwarding|TcpExt_(Listen.*|Syncookies.*|TCPTimeouts))$"
      args: "--collector.processes"
      args: "--collector.standard.go"
      args: "--collector.standard.process"
      args: "--collector.stat"
      args: "--collector.textfile.directory.hr=/usr/local/percona/pmm2/collectors/textfile-collector/high-resolution"
      args: "--collector.textfile.directory.lr=/usr/local/percona/pmm2/collectors/textfile-collector/low-resolution"
      args: "--collector.textfile.directory.mr=/usr/local/percona/pmm2/collectors/textfile-collector/medium-resolution"
      args: "--collector.textfile.hr"
      args: "--collector.textfile.lr"
      args: "--collector.textfile.mr"
      args: "--collector.time"
      args: "--collector.uname"
      args: "--collector.vmstat"
      args: "--collector.vmstat.fields=^(pg(steal_(kswapd|direct)|refill|alloc)_(movable|normal|dma3?2?)|nr_(dirty.*|slab.*|vmscan.*|isolated.*|free.*|shmem.*|i?n?active.*|anon_transparent_.*|writeback.*|unstable|unevictable|mlock|mapped|bounce|page_table_pages|kernel_stack)|drop_slab|slabs_scanned|pgd?e?activate|pgpg(in|out)|pswp(in|out)|pgm?a?j?fault)$"
      args: "--no-collector.arp"
      args: "--no-collector.bcache"
      args: "--no-collector.conntrack"
      args: "--no-collector.drbd"
      args: "--no-collector.edac"
      args: "--no-collector.infiniband"
      args: "--no-collector.interrupts"
      args: "--no-collector.ipvs"
      args: "--no-collector.ksmd"
      args: "--no-collector.logind"
      args: "--no-collector.mdadm"
      args: "--no-collector.mountstats"
      args: "--no-collector.netclass"
      args: "--no-collector.nfs"
      args: "--no-collector.nfsd"
      args: "--no-collector.ntp"
      args: "--no-collector.qdisc"
      args: "--no-collector.runit"
      args: "--no-collector.sockstat"
      args: "--no-collector.supervisord"
      args: "--no-collector.systemd"
      args: "--no-collector.tcpstat"
      args: "--no-collector.timex"
      args: "--no-collector.wifi"
      args: "--no-collector.xfs"
      args: "--no-collector.zfs"
      args: "--web.disable-exporter-metrics"
      args: "--web.listen-address=:{{ .listen_port }}"
      env: "HTTP_AUTH=pmm:/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5"
    >
  >
  agent_processes: <
    key: "/agent_id/c118aa18-3201-41e7-80b8-54c988eea699"
    value: <
      type: PROXYSQL_EXPORTER
      template_left_delim: "{{"
      template_right_delim: "}}"
      args: "-collect.mysql_connection_list"
      args: "-collect.mysql_connection_pool"
      args: "-collect.mysql_status"
      args: "-collect.stats_memory_metrics"
      args: "-web.listen-address=:{{ .listen_port }}"
      env: "DATA_SOURCE_NAME=stats:secret@tcp(127.0.0.1:6032)/?timeout=1s"
      env: "HTTP_AUTH=pmm:/agent_id/c118aa18-3201-41e7-80b8-54c988eea699"
      redact_words: "stats"
    >
  >
>  component=channel
See https://github.com/percona/pmm-managed/blob/v2.4.0/services/agents/mysql.go#L45 for where these flag lists are hard-coded.
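In other words, the exporter flags are compiled into pmm-managed as plain Go string slices, so the only way to drop one (as of 2.4) is to change that code and rebuild. The snippet below is not the actual pmm-managed source, just a minimal, self-contained sketch of the pattern; the disabledCollectors filter is a hypothetical illustration of the kind of change involved:

package main

import (
	"fmt"
	"strings"
)

// Abbreviated stand-in for the hard-coded flag list in pmm-managed's
// services/agents package (see the link above for the real thing).
var proxysqlExporterArgs = []string{
	"-collect.mysql_connection_list",
	"-collect.mysql_connection_pool",
	"-collect.mysql_status",
	"-collect.stats_memory_metrics",
	"-web.listen-address=:{{ .listen_port }}",
}

// disabledCollectors is hypothetical: flags listed here are stripped
// before the args would be sent to pmm-agent.
var disabledCollectors = map[string]bool{
	"-collect.stats_memory_metrics": true,
}

// filterArgs drops any flag whose name (ignoring an "=value" suffix)
// appears in disabledCollectors.
func filterArgs(args []string) []string {
	out := make([]string, 0, len(args))
	for _, a := range args {
		name := strings.SplitN(a, "=", 2)[0]
		if disabledCollectors[name] {
			continue
		}
		out = append(out, a)
	}
	return out
}

func main() {
	fmt.Println(strings.Join(filterArgs(proxysqlExporterArgs), " "))
	// -collect.mysql_connection_list -collect.mysql_connection_pool -collect.mysql_status -web.listen-address=:{{ .listen_port }}
}

Whether you filter like this or simply delete the line from the slice, the effect is the same: pmm-agent receives a shorter args list and starts the exporter without that collector.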

So there is no particularly good way to do this. In the end I modified the code, recompiled pmm-managed, replaced the binaries inside the Docker container, and then restarted pmm-server:

docker cp /tmp/pmm-managed 218004fce55c:/tmp/
docker cp /tmp/pmm-managed-init 218004fce55c:/tmp/
docker cp /tmp/pmm-managed-starlark 218004fce55c:/tmp/

docker exec -it 218004fce55c /bin/bash
cd /usr/sbin/
mv pmm-managed pmm-managed.origin
mv pmm-managed-init pmm-managed-init.origin
mv pmm-managed-starlark pmm-managed-starlark.origin
cp /tmp/pmm-managed* .
chmod +x pmm-managed
chmod +x pmm-managed-init
chmod +x pmm-managed-starlark

exit
docker restart 218004fce55c




One last word: please test in a test environment first.

This is the great power of Open Source - if you do not like a compiled-in default, you can build your own version!
We surely need to provide a saner approach for this - there are a ton of reasons to modify the available collectors and supply advanced options to them!