Hello!
There is a PostgreSQL 15.4 cluster with Patroni. Installed packages:
percona-haproxy/unknown,now 2:2.8.1-1.jammy amd64 [installed, can be upgraded to: 2:2.8.11-1.jammy]
percona-patroni/unknown,now 1:3.1.0-1.jammy all [installed, can be upgraded to: 1:4.0.3-1.jammy]
percona-pg-stat-monitor15/unknown,now 1:2.0.1-3.jammy amd64 [installed, can be upgraded to: 1:2.1.0-1.jammy]
percona-pgbackrest/unknown,now 1:2.47-1.jammy amd64 [installed, can be upgraded to: 1:2.54.0-1.jammy]
percona-postgresql-15-pgaudit/unknown,now 1:1.7.0-6.jammy amd64 [installed, can be upgraded to: 1:1.7.0-8.jammy]
percona-postgresql-15-repack/unknown,now 1:1.4.8-3.jammy amd64 [installed, can be upgraded to: 1:1.5.1-1.jammy]
percona-postgresql-15-wal2json/unknown,now 1:2.5-5.jammy amd64 [installed, can be upgraded to: 1:2.6-2.jammy]
percona-postgresql-15/unknown,now 2:15.4-1.jammy amd64 [installed, can be upgraded to: 2:15.10-1.jammy]
percona-postgresql-client-15/unknown,now 2:15.4-1.jammy amd64 [installed, can be upgraded to: 2:15.10-1.jammy]
percona-postgresql-common/unknown,now 1:252-1.jammy all [installed, can be upgraded to: 1:266-1.jammy]
percona-postgresql-contrib/unknown,now 1:252-1.jammy all [installed, can be upgraded to: 1:266-1.jammy]
percona-ppg-server-15/unknown,now 1:15.4-1.jammy amd64 [installed, can be upgraded to: 1:15.10-1.jammy]
percona-ppg-server-ha-15/unknown,now 1:15.4-1.jammy amd64 [installed, can be upgraded to: 1:15.10-1.jammy]
percona-release/unknown,now 1.0-27.generic all [installed, can be upgraded to: 1.0-29.generic]
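A listing like the one above can be reproduced with apt (the filter here is just an example):

apt list --installed 2>/dev/null | grep percona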
I upgraded the percona-pg-stat-monitor15 package to version 2.1.0, restarted PostgreSQL, and checked the extension version:
postgres=# SELECT pg_stat_monitor_version();
pg_stat_monitor_version
-------------------------
2.1.0
(1 row)
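pg_stat_monitor_version() reports the version of the loaded shared library; for completeness, the SQL-level extension version can be compared against it, for example:

postgres=# SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_stat_monitor';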
After this, PostgreSQL began crashing repeatedly, each time restarting and going through crash recovery.
Log (newest entries first):
2025-01-21 12:28:02.365 MSK PID:[10814] TR:[0] postgres@postgres/[n/a] SQLSTATE:[57P03] FATAL: the database system is in recovery mode
2025-01-21 12:28:01.688 MSK PID:[10809] TR:[0] @/ SQLSTATE:[00000] HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
2025-01-21 12:28:01.688 MSK PID:[10809] TR:[0] @/ SQLSTATE:[00000] LOG: database system was interrupted while in recovery at log time 2025-01-21 12:10:07 MSK
2025-01-21 12:28:00.454 MSK PID:[1693] TR:[0] @/ SQLSTATE:[00000] LOG: [pg_stat_monitor] pgsm_shmem_shutdown: Shutdown initiated.
2025-01-21 12:28:00.451 MSK PID:[1693] TR:[0] @/ SQLSTATE:[00000] LOG: all server processes terminated; reinitializing
2025-01-21 12:28:00.001 MSK PID:[10808] TR:[0] pmm@postgres/[n/a] SQLSTATE:[57P03] FATAL: the database system is in recovery mode
2025-01-21 12:27:59.995 MSK PID:[1693] TR:[0] @/ SQLSTATE:[00000] LOG: terminating any other active server processes
2025-01-21 12:27:59.995 MSK PID:[1693] TR:[0] @/ SQLSTATE:[00000] DETAIL: Failed process was running: SELECT /* agent='pgstatmonitor' */ "pg_stat_monitor"."bucket", "pg_stat_monitor"."client_ip", "pg_stat_monitor"."query", "pg_stat_monitor"."calls", "pg_stat_monitor"."shared_blks_hit", "pg_stat_monitor"."shared_blks_read", "pg_stat_monitor"."shared_blks_dirtied", "pg_stat_monitor"."shared_blks_written", "pg_stat_monitor"."local_blks_hit", "pg_stat_monitor"."local_blks_read", "pg_stat_monitor"."local_blks_dirtied", "pg_stat_monitor"."local_blks_written", "pg_stat_monitor"."temp_blks_read", "pg_stat_monitor"."temp_blks_written", "pg_stat_monitor"."blk_read_time", "pg_stat_monitor"."blk_write_time", "pg_stat_monitor"."resp_calls", "pg_stat_monitor"."cpu_user_time", "pg_stat_monitor"."cpu_sys_time", "pg_stat_monitor"."rows", "pg_stat_monitor"."relations", "pg_stat_monitor"."datname", "pg_stat_monitor"."userid", "pg_stat_monitor"."top_queryid", "pg_stat_monitor"."planid", "pg_stat_monitor"."query_plan", "pg_stat_monitor"."top_query", "pg_stat_monitor"."application_name", "pg_stat_monitor"."cmd_type", "pg_stat_mon
2025-01-21 12:27:59.995 MSK PID:[1693] TR:[0] @/ SQLSTATE:[00000] LOG: server process (PID 10806) was terminated by signal 11: Segmentation fault
Recovery then runs, and the whole cycle starts over again.
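Since the backend dies with a segmentation fault, it might help (for a bug report) to capture a backtrace of the crashing process. A rough sketch, assuming systemd-coredump is installed and debug symbols for the server and the extension are available:

coredumpctl list postgres
coredumpctl gdb    # inside gdb: bt full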
In the end I reinstalled the previous release:
apt-get install percona-pg-stat-monitor15=1:2.0.1-3.jammy
I restarted PostgreSQL and checked the extension version again:
postgres=# SELECT pg_stat_monitor_version();
pg_stat_monitor_version
-------------------------
2.0.1
(1 row)
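To keep the node on 2.0.1 until the cause is understood, the downgraded package can also be held so that a routine apt upgrade does not pull 2.1.0 back in:

apt-mark hold percona-pg-stat-monitor15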
What could be causing this? Is it safe to upgrade to 2.1.0?