ERROR: 57014: canceling statement due to user request

Description: ERROR: 57014: canceling statement due to user request

We have installed and configured PMM for our PostgreSQL clusters, and we are now encountering this error message frequently in our cluster logs.

Steps to Reproduce:

[Step-by-step instructions on how to reproduce the issue, including any specific settings or configurations]

Version:

PMM 3.6.0

Logs:

ERROR: 57014: canceling statement due to user request

2026-02-17 23:59:53.119 GMT [2661241] user=pmm_user db=postgres app=[unknown] client=10.10.12.13 LOCATION: exec_simple_query, postgres.c:1371
2026-02-17 23:59:53.261 GMT [2661275] user=[unknown] db=[unknown] app=[unknown] client=10.10.12.13 LOG: 00000: connection received: host=10.10.17.17 port=43402
2026-02-17 23:59:53.261 GMT [2661275] user=[unknown] db=[unknown] app=[unknown] client=10.10.12.13 LOCATION: BackendInitialize, backend_startup.c:228
2026-02-17 23:59:53.266 GMT [2661235] user=pmm_user db=postgres app=[unknown] client=10.10.12.13 ERROR: 57014: canceling statement due to user request
2026-02-17 23:59:53.266 GMT [2661235] user=pmm_user db=postgres app=[unknown] client=10.10.12.13 LOCATION: ProcessInterrupts, postgres.c:3465
2026-02-17 23:59:53.266 GMT [2661235] user=pmm_user db=postgres app=[unknown] client=10.10.12.13 STATEMENT: SELECT pg_database_size($1)
2026-02-17 23:59:53.760 GMT [2661277] user=[unknown] db=[unknown] app=[unknown] client=10.10.12.13 LOG: 00000: connection received: host=10.10.17.17 port=43406

Expected Result:

What are the resolution or mitigation steps for this issue?

Actual Result:

[What actually happened when the user encountered the issue]

Additional Information:

Hi @Bharath_K,

The key detail in your error is the text “due to user request”, not “due to statement timeout”. PostgreSQL uses distinct messages for each cause (error code 57014 = query_canceled). “User request” means the postgres_exporter client cancelled the query via the PG cancel protocol, which happens when the Prometheus scrape timeout fires before pg_database_size() finishes.
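For reference, you can see both messages side by side in a psql session; this is a sketch, and the exact message wording is what distinguishes the two causes:

```sql
-- Server-side cancellation: statement_timeout fires.
SET statement_timeout = '100ms';
SELECT pg_sleep(1);
-- ERROR:  canceling statement due to statement timeout

-- Client-side cancellation: press Ctrl+C in psql (or call PQcancel()
-- from a driver, which is what postgres_exporter does) while it runs.
RESET statement_timeout;
SELECT pg_sleep(60);
-- ERROR:  canceling statement due to user request
```

Both share SQLSTATE 57014 (query_canceled); only the message text tells you which side cancelled.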

The pg_database collector runs in the high-resolution (HR) scrape, which by default has a 5-second interval and a 4.5-second scrape timeout (interval * 0.9). All HR collectors share that 4.5-second window. On large databases, pg_database_size() scans the entire data directory per database and can easily exceed this, especially if you have multiple large databases. This is a known pattern — PMM-8646 tracked a related postgres_exporter timeout/stall fix in PMM 2.23.0.
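The budget arithmetic above can be sketched as a one-liner; the `timeout = resolution * 0.9` rule and the resolution values come from the explanation above, and the function name here is just illustrative:

```python
# Scrape-timeout budget the exporter gets, assuming PMM derives it as
# resolution * 0.9 (per the explanation above).
def scrape_timeout(hr_resolution_s: float) -> float:
    """Scrape timeout derived from the high-resolution metrics interval."""
    return hr_resolution_s * 0.9

for resolution in (5, 15, 60):
    print(f"HR resolution {resolution:>2}s -> "
          f"scrape timeout {scrape_timeout(resolution):.1f}s")
```

Every HR collector must finish inside that single budget, so one slow `pg_database_size()` call starves them all.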

Fix: Increase the HR metrics resolution in the PMM UI (Configuration > Settings > Metrics Resolution). Raising it from 5s to, say, 15s gives 13.5 seconds of scrape timeout, which should be enough for most environments. The tradeoff is less granular metrics at the high-resolution tier.

Alternatively, if you don’t need the pg_database_size_bytes metric, you can disable the database collector entirely when adding the service:

pmm-admin add postgresql --disable-collectors=database ...

Guardrail: As a secondary measure, set a generous statement_timeout for pmm_user so PostgreSQL doesn’t also cancel queries independently:

ALTER ROLE pmm_user SET statement_timeout = '30s';

New exporter connections pick up the change immediately (no restart needed). This won’t fix the scrape timeout cancellations, but prevents the server-side timeout from adding a second source of 57014 errors.

To help narrow down the timing, could you run these on your monitored server?

-- Check current timeout and role settings
SHOW statement_timeout;
SELECT rolname, rolconfig FROM pg_roles WHERE rolname = 'pmm_user';

-- See which databases are large enough to cause slow pg_database_size()
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database WHERE datallowconn AND NOT datistemplate
ORDER BY pg_database_size(datname) DESC;
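If any database there is large, you can also time the size call directly in psql to compare it against the 4.5-second scrape budget:

```sql
-- \timing is a psql meta-command that prints each query's wall-clock time.
\timing on
SELECT pg_database_size(current_database());
```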