Errors in collecting metrics under high database load

Hello!
I installed PMM server version 3. On a heavily loaded test server running PostgreSQL 14, I installed PMM agent version 3 and pg_stat_monitor 2.2. Metrics are coming in.

pg_stat_monitor settings on the database server:
pg_stat_monitor.pgsm_bucket_time: 60
pg_stat_monitor.pgsm_enable_overflow: true
pg_stat_monitor.pgsm_enable_pgsm_query_id: false
pg_stat_monitor.pgsm_enable_query_plan: true
pg_stat_monitor.pgsm_histogram_max: 30000
pg_stat_monitor.pgsm_histogram_min: 50
pg_stat_monitor.pgsm_max: 512
pg_stat_monitor.pgsm_max_buckets: 20
pg_stat_monitor.pgsm_normalized_query: true
pg_stat_monitor.pgsm_query_shared_buffer: 384
pg_stat_monitor.pgsm_track_application_names: false
pg_stat_monitor.pgsm_track_planning: false
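
For reference, the effective values can be checked from psql; a minimal sketch (the pg_settings query is plain PostgreSQL, nothing here is specific to my setup):

-- Sketch: confirm the pg_stat_monitor settings the server is actually using.
-- If these differ from postgresql.conf, the server was likely not restarted
-- after the change (the shared-memory GUCs only take effect on restart, as
-- far as I understand).
SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor.%'
ORDER BY name;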

Under high load, when PostgreSQL QPS exceeds 10K, postgres_exporter logs errors like these:
caller=postgres_exporter.go:770 level=error err="Error opening connection to database (postgres://sa_percona_monitoring:PASSWORD_REMOVED@localhost:5432/base_1?connect_timeout=1&sslmode=disable): read tcp 127.0.0.1:34926->127.0.0.1:5432: i/o timeout" agentID=25836318-331d-4630-9cf8-7b739b9c5f4f component=agent-process type=postgres_exporter
caller=postgres_exporter.go:770 level=error err="Error opening connection to database (postgres://sa_percona_monitoring:PASSWORD_REMOVED@localhost:5432/base_2?connect_timeout=1&sslmode=disable): read tcp 127.0.0.1:34944->127.0.0.1:5432: i/o timeout" agentID=25836318-331d-4630-9cf8-7b739b9c5f4f component=agent-process type=postgres_exporter
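
One thing I tried to rule out (this is just my guess, the logs do not confirm it) is connection-slot exhaustion on the server while the errors occur; a quick sketch:

-- Sketch: compare the current backend count to max_connections while the
-- i/o timeout errors are happening. A server near its limit could stall
-- new exporter connections.
SELECT (SELECT count(*) FROM pg_stat_activity) AS backends,
       current_setting('max_connections') AS max_connections;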

There are also gaps in the graphs: PostgreSQL metrics go missing in PMM. While the pg_stat_monitor metrics are absent, the database itself keeps working normally; the application services continue to send queries and receive responses.
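
While a gap is in progress, querying the view directly can separate an extension-side problem from an exporter-side one; a sketch using columns from the exporter's SELECT shown in the log below:

-- Sketch: during a gap, check whether pg_stat_monitor still has completed
-- buckets. Rows here but nothing in PMM would point at the exporter/agent
-- path rather than at the extension.
SELECT bucket, bucket_start_time, count(*) AS entries
FROM pg_stat_monitor
WHERE bucket_done
GROUP BY bucket, bucket_start_time
ORDER BY bucket_start_time DESC;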

The following messages appear in the PostgreSQL logs:
LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp3145589.7", size 314170608
STATEMENT: SELECT /* agent='pgstatmonitor' */ "pg_stat_monitor"."bucket", "pg_stat_monitor"."client_ip", "pg_stat_monitor"."query", "pg_stat_monitor"."calls", "pg_stat_monitor"."shared_blks_hit", "pg_stat_monitor"."shared_blks_read", "pg_stat_monitor"."shared_blks_dirtied", "pg_stat_monitor"."shared_blks_written", "pg_stat_monitor"."local_blks_hit", "pg_stat_monitor"."local_blks_read", "pg_stat_monitor"."local_blks_dirtied", "pg_stat_monitor"."local_blks_written", "pg_stat_monitor"."temp_blks_read", "pg_stat_monitor"."temp_blks_written", "pg_stat_monitor"."resp_calls", "pg_stat_monitor"."cpu_user_time", "pg_stat_monitor"."cpu_sys_time", "pg_stat_monitor"."rows", "pg_stat_monitor"."relations", "pg_stat_monitor"."datname", "pg_stat_monitor"."userid", "pg_stat_monitor"."top_queryid", "pg_stat_monitor"."planid", "pg_stat_monitor"."query_plan", "pg_stat_monitor"."top_query", "pg_stat_monitor"."application_name", "pg_stat_monitor"."cmd_type", "pg_stat_monitor"."cmd_type_text", "pg_stat_monitor"."elevel", "pg_stat_monitor"."sqlcode", "pg_stat_monitor"."message", "pg_stat_monitor"."pgsm_query_id", "pg_stat_monitor"."dbid", "pg_stat_monitor"."blk_read_time", "pg_stat_monitor"."blk_write_time", "pg_stat_monitor"."total_exec_time", "pg_stat_monitor"."min_exec_time", "pg_stat_monitor"."max_exec_time", "pg_stat_monitor"."mean_exec_time", "pg_stat_monitor"."stddev_exec_time", "pg_stat_monitor"."total_plan_time", "pg_stat_monitor"."min_plan_time", "pg_stat_monitor"."max_plan_time", "pg_stat_monitor"."mean_plan_time", "pg_stat_monitor"."wal_records", "pg_stat_monitor"."wal_fpi", "pg_stat_monitor"."wal_bytes", "pg_stat_monitor"."plans", "pg_stat_monitor"."comments", "pg_stat_monitor"."bucket_start_time", "pg_stat_monitor"."username" FROM "pg_stat_monitor" WHERE queryid IS NOT NULL AND query IS NOT NULL AND bucket_done AND pgsm_query_id IS NOT NULL
WARNING: [pg_stat_monitor] pgsm_store: Hash table is out of memory and can no longer store queries!
DETAIL: You may reset the view or when the buckets are deallocated, pg_stat_monitor will resume saving queries. Alternatively, try increasing the value of pg_stat_monitor.pgsm_max.
WARNING: [pg_stat_monitor] pg_stat_monitor_internal: Hash table is out of memory and can no longer store queries!
DETAIL: You may reset the view or when the buckets are deallocated, pg_stat_monitor will resume saving queries. Alternatively, try increasing the value of pg_stat_monitor.pgsm_max.
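Following the DETAIL hint, I assume the fix is to raise pgsm_max; a sketch with hypothetical values (these GUCs size shared memory at startup, so a PostgreSQL restart is required, pg_reload_conf() is not enough):

-- Sketch: raise the limit named in the warnings; 1024 is a hypothetical
-- value (my current setting is 512). pgsm_query_shared_buffer may need a
-- matching bump; both take effect only after a server restart.
ALTER SYSTEM SET pg_stat_monitor.pgsm_max = 1024;
ALTER SYSTEM SET pg_stat_monitor.pgsm_query_shared_buffer = 768;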

My issue is very similar to this one: https://perconadev.atlassian.net/browse/PMM-8646

How can I increase the connection timeout (the connect_timeout=1 in the DSN shown in the errors above) that postgres_exporter uses when connecting to PostgreSQL?