Weird error about scraping mysqld_exporter

Getting these errors periodically in /var/log/messages:

Jun  8 04:10:41 master pmm-agent[616641]: INFO[2022-06-08T04:10:41.807-04:00] time="2022-06-08T04:10:41-04:00" level=error msg="error encoding and sending metric family: write tcp 127.0.0.1:42002->127.0.0.1:60760: write: broken pipe\n" source="log.go:195"  agentID=/agent_id/d2be8bdc-c0b9-4988-a16b-8fe32844e31f component=agent-process type=mysqld_exporter
Jun  8 04:10:41 master pmm-agent[616641]: INFO[2022-06-08T04:10:41.807-04:00] time="2022-06-08T04:10:41-04:00" level=error msg="error encoding and sending metric family: write tcp 127.0.0.1:42002->127.0.0.1:60760: write: broken pipe\n" source="log.go:195"  agentID=/agent_id/d2be8bdc-c0b9-4988-a16b-8fe32844e31f component=agent-process type=mysqld_exporter
Jun  8 04:10:42 master pmm-agent[616641]: INFO[2022-06-08T04:10:42.151-04:00] 2022-06-08T08:10:42.151Z error VictoriaMetrics/lib/promscrape/scrapework.go:355 error when scraping "http://127.0.0.1:42002/metrics?collect%5B%5D=auto_increment.columns&collect%5B%5D=binlog_size&collect%5B%5D=custom_query.lr&collect%5B%5D=engine_tokudb_status&collect%5B%5D=global_variables&collect%5B%5D=heartbeat&collect%5B%5D=info_schema.clientstats&collect%5B%5D=info_schema.innodb_tablespaces&collect%5B%5D=info_schema.tables&collect%5B%5D=info_schema.tablestats&collect%5B%5D=info_schema.userstats&collect%5B%5D=perf_schema.eventsstatements&collect%5B%5D=perf_schema.file_instances&collect%5B%5D=perf_schema.indexiowaits&collect%5B%5D=perf_schema.tableiowaits" from job "mysqld_exporter_agent_id_d2be8bdc-c0b9-4988-a16b-8fe32844e31f_lr-1m0s" with labels {agent_id="/agent_id/d2be8bdc-c0b9-4988-a16b-8fe32844e31f",agent_type="mysqld_exporter",instance="/agent_id/d2be8bdc-c0b9-4988-a16b-8fe32844e31f",job="mysqld_exporter_agent_id_d2be8bdc-c0b9-4988-a16b-8fe32844e31f_lr-1m0s",machine_id="/machine_id/f778309367dc4c1cbe531a7b7c0bc55f",node_id="/node_id/16bc2a9f-259b-4012-82a2-9275ab24c69d",node_name="master.towfowi.net",node_type="generic",service_id="/service_id/baf2e8db-f43d-4e32-9c10-be611ec70c9f",service_name="master.towfowi.net-mysql",service_type="mysql"}: cannot read Prometheus exposition data: cannot read a block of data in 0.000s: the response from
"http://127.0.0.1:42002/metrics?collect%5B%5D=auto_increment.columns&collect%5B%5D=binlog_size&collect%5B%5D=custom_query.lr&collect%5B%5D=engine_tokudb_status&collect%5B%5D=global_variables&collect%5B%5D=heartbeat&collect%5B%5D=info_schema.clientstats&collect%5B%5D=info_schema.innodb_tablespaces&collect%5B%5D=info_schema.tables&collect%5B%5D=info_schema.tablestats&collect%5B%5D=info_schema.userstats&collect%5B%5D=perf_schema.eventsstatements&collect%5B%5D=perf_schema.file_instances&collect%5B%5D=perf_schema.indexiowaits&collect%5B%5D=perf_schema.tableiowaits" exceeds -promscrape.maxScrapeSize=16777216; either reduce the response size for the target or increase -promscrape.maxScrapeSize  agentID=/agent_id/5abfcf3e-2a0a-466c-9037-5dd7625612d1 component=agent-process type=vm_agent

Any ideas?

Hi @Matt_Westfall, thanks for posting your question!

This may be a very active server producing large response payloads. First, let's confirm by looking at how often this is occurring: navigate to the VictoriaMetrics Agents Overview dashboard and scroll down to the Scrapes p0.95 Response Size graph.

What are some of the peaks? Are any greater than 16 MB?
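If the dashboard is hard to find, you can also get a rough measurement of a single scrape directly on the client host. A sketch only: port 42002 is taken from your log, the long collect[] query string is omitted so the real scrape response may be larger, and PMM exporters are typically protected with basic auth, so you may need to add `-u pmm:<agent-password>` to the curl call.

```shell
# Rough sketch: measure one scrape response on the client host.
# Port 42002 comes from the log above; without the collect[] parameters
# (and possibly without auth) the real response may be larger than this.
size=$(curl -s --max-time 10 'http://127.0.0.1:42002/metrics' | wc -c)

# Default VictoriaMetrics limit: -promscrape.maxScrapeSize=16777216 bytes (16 MiB)
limit=$((16 * 1024 * 1024))

echo "response: ${size} bytes, limit: ${limit} bytes"
if [ "${size}" -gt "${limit}" ]; then
  echo "this scrape would exceed -promscrape.maxScrapeSize"
fi
```

If the measured size is anywhere near the 16777216-byte limit, reducing the number of collectors (see below) is usually easier than raising the limit.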

Also what version of pmm2-client are you running?

If you want to reduce the number of metrics collected (which will reduce the response size), you could start with the --disable-tablestats flag. Note that this flag only takes effect when the service is added, so you first need to run pmm-admin remove mysql <mysql-service> and then pmm-admin add mysql --disable-tablestats ... to apply the change.
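The remove-and-re-add sequence could look like the sketch below. This is not verified against your setup: the service name is taken from the service_name label in your log, and the username, password, host, and port are placeholders you must replace with the values you originally registered the service with.

```shell
# Sketch only: --disable-tablestats takes effect at add time,
# so the service must be removed and re-added.
SERVICE="master.towfowi.net-mysql"   # service_name from the log labels above

pmm-admin remove mysql "$SERVICE"

# Placeholders: --username/--password/--host/--port must match your setup.
pmm-admin add mysql --disable-tablestats \
  --username=pmm --password='<your-password>' \
  --host=127.0.0.1 --port=3306 \
  --service-name="$SERVICE"
```

Disabling tablestats drops the per-table collectors (info_schema.tables, tablestats, and friends), which are usually the largest contributors to the scrape payload on servers with many tables.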


I cannot find that dashboard anywhere.

The log says I can increase -promscrape.maxScrapeSize, but how/where do I do that?

[root@master ~]# pmm-agent -v
ProjectName: pmm-agent
Version: 2.28.0
PMMVersion: 2.28.0
Timestamp: 2022-05-10 16:55:33 (UTC)
FullCommit: 44d20e7bb2a0dc0e0439cdf93f1d622a55d3b5e9


[PMM-11225] cannot read Prometheus exposition data: cannot read a block of data in 0.000s - Percona JIRA issue that tracks a similar problem
