
How to modify the pmm2 client configuration and disable some stats collectors

It appears that the pmm2 client is spamming /var/log/messages with errors. Is there any way to control which stats are collected by pmm-agent for MySQL?

/var/log/messages:

```
Apr 28 07:39:33 dev-evvio-db01 pmm-agent: INFO[2020-04-28T11:39:33.764+00:00] time="2020-04-28T11:39:33Z" level=error msg="Error scraping for collect.engine_tokudb_status: Error 1286: Unknown storage engine 'TOKUDB'" source="exporter.go:116" agentID=/agent_id/e7621b6f-5138-4612-ad2c-a975d3bc0af8 component=agent-process type=mysqld_exporter
Apr 28 07:39:33 dev-evvio-db01 pmm-agent: INFO[2020-04-28T11:39:33.781+00:00] time="2020-04-28T11:39:33Z" level=error msg="Error scraping for collect.heartbeat: Error 1146: Table 'heartbeat.heartbeat' doesn't exist" source="exporter.go:116" agentID=/agent_id/e7621b6f-5138-4612-ad2c-a975d3bc0af8 component=agent-process type=mysqld_exporter
Apr 28 07:40:00 dev-evvio-db01 pmm-agent: INFO[2020-04-28T11:40:00.071+00:00] Sending 32 buckets. agentID=/agent_id/f10f8686-3b62-48f4-926d-798c9aef58a8 component=agent-builtin type=qan_mysql_perfschema_agent
```

The mysqld_exporter process is clearly attempting to collect the following stats, including the ones that are failing:

--collect.heartbeat
--collect.engine_tokudb_status

```
# ps aux | grep mysqld_exporter
root      1546  0.8  0.2 114640 21396 ?        Sl   10:31   0:39 /usr/local/percona/pmm2/exporters/mysqld_exporter --collect.auto_increment.columns --collect.binlog_size --collect.custom_query.hr --collect.custom_query.hr.directory=/usr/local/percona/pmm2/collectors/custom-queries/mysql/high-resolution --collect.custom_query.lr --collect.custom_query.lr.directory=/usr/local/percona/pmm2/collectors/custom-queries/mysql/low-resolution --collect.custom_query.mr --collect.custom_query.mr.directory=/usr/local/percona/pmm2/collectors/custom-queries/mysql/medium-resolution --collect.engine_innodb_status --collect.engine_tokudb_status --collect.global_status --collect.global_variables --collect.heartbeat --collect.info_schema.clientstats --collect.info_schema.innodb_cmp --collect.info_schema.innodb_cmpmem --collect.info_schema.innodb_metrics --collect.info_schema.innodb_tablespaces --collect.info_schema.processlist --collect.info_schema.query_response_time --collect.info_schema.tables --collect.info_schema.tablestats --collect.info_schema.userstats --collect.perf_schema.eventsstatements --collect.perf_schema.eventswaits --collect.perf_schema.file_events --collect.perf_schema.file_instances --collect.perf_schema.indexiowaits --collect.perf_schema.tableiowaits --collect.perf_schema.tablelocks --collect.slave_status --collect.standard.go --collect.standard.process --exporter.conn-max-lifetime=55s --exporter.global-conn-pool --exporter.max-idle-conns=3 --exporter.max-open-conns=3 --web.listen-address=:42001
root      6111  0.0  0.0 112712   988 pts/0    S+   11:45   0:00 grep --color=auto mysqld_exporter
```
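
Both failing collectors correspond to features this server simply doesn't have, which is easy to confirm from the MySQL side (a quick check, assuming a local mysql client with sufficient privileges; the heartbeat database/table names are the exporter's defaults):

```
# no output here means the TokuDB engine is not installed,
# so collect.engine_tokudb_status can never succeed
mysql -e "SHOW ENGINES" | grep -i tokudb

# ERROR 1146 here means the table that collect.heartbeat polls is missing
mysql -e "SELECT 1 FROM heartbeat.heartbeat LIMIT 1"
```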


Best Answer

  • steve.hoffman (Percona Staff)
    Accepted Answer
    @lvit01 this is actually a known issue: https://jira.percona.com/browse/PMM-4665 but it hasn't been picked up for development yet. I checked my instance and I too am seeing the errors on 2.4 (client and server). If you don't mind registering for Jira and adding a comment, it will help it bubble up in the priority list.
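    In the meantime, the collect.heartbeat error specifically can be silenced by creating the table the exporter polls, and the syslog spam can be filtered out. A rough sketch (assuming pt-heartbeat from Percona Toolkit is available, and that rsyslog is what feeds /var/log/messages):

    ```
    # stopgap 1: create and keep updating the heartbeat.heartbeat table that
    # the exporter's collect.heartbeat scrapes by default
    pt-heartbeat --update --database heartbeat --table heartbeat \
      --create-table --daemonize

    # stopgap 2: keep pmm-agent's exporter noise out of /var/log/messages
    # entirely (a blunt filter that drops ALL pmm-agent syslog messages)
    echo 'if $programname == "pmm-agent" then stop' >/etc/rsyslog.d/30-pmm-agent.conf
    systemctl restart rsyslog
    ```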

Answers

  • lvit01 (Contributor)
    Thanks Steve. Yes, I will add a comment in Jira as well.
  • Fan (Contributor)
    edited June 18
    I had a similar need recently: I wanted to remove the -collect.stats_memory_metrics parameter from proxysql_exporter.
    I eventually found that the pmm-agent on the monitored server actually fetches the exporter parameters from pmm-server and then starts the exporter:

    ```
    /usr/sbin/pmm-agent --config-file=/usr/local/percona/pmm2/config/pmm-agent.yaml --debug

    DEBU[2020-06-15T14:49:22.992+08:00] Logs redactor disabled in debug mode.         agentID=/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5 component=agent-process type=node_exporter
    DEBU[2020-06-15T14:49:22.992+08:00] Starting: /usr/local/percona/pmm2/exporters/proxysql_exporter -collect.mysql_connection_list -collect.mysql_connection_pool -collect.mysql_status -collect.stats_memory_metrics -web.listen-address=:42001 (environment: DATA_SOURCE_NAME=stats:***@tcp(127.0.0.1:6032)/?timeout=1s, HTTP_AUTH=pmm:/agent_id/c118aa18-3201-41e7-80b8-54c988eea699).  agentID=/agent_id/c118aa18-3201-41e7-80b8-54c988eea699 component=agent-process type=proxysql_exporter
    DEBU[2020-06-15T14:49:22.992+08:00] Logs redactor disabled in debug mode.         agentID=/agent_id/c118aa18-3201-41e7-80b8-54c988eea699 component=agent-process type=proxysql_exporter
    INFO[2020-06-15T14:49:22.992+08:00] Sending status: STARTING (port 42000).        agentID=/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5 component=agent-process type=node_exporter

    DEBU[2020-06-15T14:49:22.991+08:00] Received message (3003 bytes):
    id: 1
    set_state: <
      agent_processes: <
        key: "/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5"
        value: <
          type: NODE_EXPORTER
          template_left_delim: "{{"
          template_right_delim: "}}"
          args: "--collector.bonding"
          args: "--collector.buddyinfo"
          args: "--collector.cpu"
          args: "--collector.diskstats"
          args: "--collector.entropy"
          args: "--collector.filefd"
          args: "--collector.filesystem"
          args: "--collector.hwmon"
          args: "--collector.loadavg"
          args: "--collector.meminfo"
          args: "--collector.meminfo_numa"
          args: "--collector.netdev"
          args: "--collector.netstat"
          args: "--collector.netstat.fields=^(.*_(InErrors|InErrs|InCsumErrors)|Tcp_(ActiveOpens|PassiveOpens|RetransSegs|CurrEstab|AttemptFails|OutSegs|InSegs|EstabResets|OutRsts|OutSegs)|Tcp_Rto(Algorithm|Min|Max)|Udp_(RcvbufErrors|SndbufErrors)|Udp(6?|Lite6?)_(InDatagrams|OutDatagrams|RcvbufErrors|SndbufErrors|NoPorts)|Icmp6?_(OutEchoReps|OutEchos|InEchos|InEchoReps|InAddrMaskReps|InAddrMasks|OutAddrMaskReps|OutAddrMasks|InTimestampReps|InTimestamps|OutTimestampReps|OutTimestamps|OutErrors|InDestUnreachs|OutDestUnreachs|InTimeExcds|InRedirects|OutRedirects|InMsgs|OutMsgs)|IcmpMsg_(InType3|OutType3)|Ip(6|Ext)_(InOctets|OutOctets)|Ip_Forwarding|TcpExt_(Listen.*|Syncookies.*|TCPTimeouts))$"
          args: "--collector.processes"
          args: "--collector.standard.go"
          args: "--collector.standard.process"
          args: "--collector.stat"
          args: "--collector.textfile.directory.hr=/usr/local/percona/pmm2/collectors/textfile-collector/high-resolution"
          args: "--collector.textfile.directory.lr=/usr/local/percona/pmm2/collectors/textfile-collector/low-resolution"
          args: "--collector.textfile.directory.mr=/usr/local/percona/pmm2/collectors/textfile-collector/medium-resolution"
          args: "--collector.textfile.hr"
          args: "--collector.textfile.lr"
          args: "--collector.textfile.mr"
          args: "--collector.time"
          args: "--collector.uname"
          args: "--collector.vmstat"
          args: "--collector.vmstat.fields=^(pg(steal_(kswapd|direct)|refill|alloc)_(movable|normal|dma3?2?)|nr_(dirty.*|slab.*|vmscan.*|isolated.*|free.*|shmem.*|i?n?active.*|anon_transparent_.*|writeback.*|unstable|unevictable|mlock|mapped|bounce|page_table_pages|kernel_stack)|drop_slab|slabs_scanned|pgd?e?activate|pgpg(in|out)|pswp(in|out)|pgm?a?j?fault)$"
          args: "--no-collector.arp"
          args: "--no-collector.bcache"
          args: "--no-collector.conntrack"
          args: "--no-collector.drbd"
          args: "--no-collector.edac"
          args: "--no-collector.infiniband"
          args: "--no-collector.interrupts"
          args: "--no-collector.ipvs"
          args: "--no-collector.ksmd"
          args: "--no-collector.logind"
          args: "--no-collector.mdadm"
          args: "--no-collector.mountstats"
          args: "--no-collector.netclass"
          args: "--no-collector.nfs"
          args: "--no-collector.nfsd"
          args: "--no-collector.ntp"
          args: "--no-collector.qdisc"
          args: "--no-collector.runit"
          args: "--no-collector.sockstat"
          args: "--no-collector.supervisord"
          args: "--no-collector.systemd"
          args: "--no-collector.tcpstat"
          args: "--no-collector.timex"
          args: "--no-collector.wifi"
          args: "--no-collector.xfs"
          args: "--no-collector.zfs"
          args: "--web.disable-exporter-metrics"
          args: "--web.listen-address=:{{ .listen_port }}"
          env: "HTTP_AUTH=pmm:/agent_id/b1177726-6318-42ec-a4f4-893eb478a9e5"
        >
      >
      agent_processes: <
        key: "/agent_id/c118aa18-3201-41e7-80b8-54c988eea699"
        value: <
          type: PROXYSQL_EXPORTER
          template_left_delim: "{{"
          template_right_delim: "}}"
          args: "-collect.mysql_connection_list"
          args: "-collect.mysql_connection_pool"
          args: "-collect.mysql_status"
          args: "-collect.stats_memory_metrics"
          args: "-web.listen-address=:{{ .listen_port }}"
          env: "DATA_SOURCE_NAME=stats:[email protected](127.0.0.1:6032)/?timeout=1s"
          env: "HTTP_AUTH=pmm:/agent_id/c118aa18-3201-41e7-80b8-54c988eea699"
          redact_words: "stats"
        >
      >
    >
      component=channel
    ```

    See https://github.com/percona/pmm-managed/blob/v2.4.0/services/agents/mysql.go#L45

    So there is no particularly good way to do this. In the end I modified the code, recompiled pmm-managed, replaced the binaries inside the Docker container, and restarted pmm-server.
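    The modify-and-rebuild step, roughly (a sketch, assuming a Go toolchain; editing services/agents/proxysql.go is my assumption, by analogy with the mysql.go file linked above):

    ```
    # check out the release matching the server, drop the unwanted flag
    # from the hardcoded args list, then rebuild the binary
    git clone --branch v2.4.0 https://github.com/percona/pmm-managed.git
    cd pmm-managed
    vi services/agents/proxysql.go   # remove the "-collect.stats_memory_metrics" arg
    go build -o /tmp/pmm-managed .
    ```

    Then the rebuilt binaries can be copied into the container: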

    ```
    docker cp /tmp/pmm-managed 218004fce55c:/tmp/
    docker cp /tmp/pmm-managed-init 218004fce55c:/tmp/
    docker cp /tmp/pmm-managed-starlark 218004fce55c:/tmp/

    docker exec -it 218004fce55c /bin/bash
    cd /usr/sbin/
    mv pmm-managed pmm-managed.origin
    mv pmm-managed-init pmm-managed-init.origin
    mv pmm-managed-starlark pmm-managed-starlark.origin
    cp /tmp/pmm-managed* .

    chmod +x pmm-managed
    chmod +x pmm-managed-init
    chmod +x pmm-managed-starlark

    exit
    docker restart 218004fce55c
    ```
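    After the restart it is worth verifying that the change actually took effect:

    ```
    # confirm the container came back up cleanly with the replaced binaries
    docker logs --tail 50 218004fce55c
    # and, on the monitored host, check which flags the exporter now runs with
    ps aux | grep proxysql_exporter
    ```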
    One last word: please test in a test environment first.
  • Peter (Percona CEO)
    This is the great power of Open Source: if you do not like a compiled-in default, you can build your own version!
    We surely need to provide a saner approach, though: there are a ton of reasons to modify the available collectors and supply advanced options to them!