Can the PMM Server be deployed on k8s as a Deployment object?

Hi, I’m trying to deploy the PMM Server on k8s as a Deployment using this sample file: https://raw.githubusercontent.com/percona-platform/dbaas-controller/main/deploy/pmm-server-minikube.yaml
I set replicas to one.
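
For reference, the Deployment in that file looks roughly like the sketch below (abridged and from memory, not copied verbatim from the file: the image tag, labels, and volume names are illustrative, and I've left out the pod's second container):

# Abridged, illustrative sketch of the applied Deployment; not the exact manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pmmserver-sample
spec:
  replicas: 1                      # a single replica, as mentioned above
  selector:
    matchLabels:
      app: pmmserver-sample
  template:
    metadata:
      labels:
        app: pmmserver-sample
    spec:
      containers:
        - name: pmm-server
          image: percona/pmm-server:2
          ports:
            - containerPort: 443
          volumeMounts:
            - name: pmm-data
              mountPath: /srv      # PMM Server keeps PostgreSQL and other state under /srv
      volumes:
        - name: pmm-data
          persistentVolumeClaim:
            claimName: pmm-data    # one PVC shared by every pod the Deployment creates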

But I encountered the following errors when I deleted the PMM Server pod:

[root@pmmserver-sample-b7dd8bccb-kzfcw logs]# tail -f postgresql.log         
2021-07-05 00:34:37.832 UTC [118] LOG:  database system was interrupted; last known up at 2021-07-05 00:30:59 UTC
2021-07-05 00:34:37.832 UTC [119] FATAL:  the database system is starting up
2021-07-05 00:34:37.834 UTC [123] FATAL:  the database system is starting up
2021-07-05 00:34:38.210 UTC [118] LOG:  database system was not properly shut down; automatic recovery in progress
2021-07-05 00:34:38.284 UTC [118] LOG:  redo starts at 0/16E74E0
2021-07-05 00:34:38.303 UTC [118] LOG:  invalid record length at 0/17242B8: wanted 24, got 0
2021-07-05 00:34:38.303 UTC [118] LOG:  redo done at 0/1724070
2021-07-05 00:34:38.303 UTC [118] LOG:  last completed transaction was at log time 2021-07-05 00:34:30.612966+00
2021-07-05 00:34:38.610 UTC [130] FATAL:  the database system is starting up
2021-07-05 00:34:39.444 UTC [42] LOG:  database system is ready to accept connections
2021-07-05 00:35:39.589 UTC [42] LOG:  lock file "postmaster.pid" contains wrong PID: 606 instead of 42
2021-07-05 00:35:39.589 UTC [42] LOG:  performing immediate shutdown because data directory lock file is invalid
2021-07-05 00:35:39.589 UTC [42] LOG:  received immediate shutdown request
2021-07-05 00:35:39.713 UTC [454] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.713 UTC [454] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.713 UTC [454] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.713 UTC [469] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.713 UTC [469] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.713 UTC [469] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.713 UTC [158] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.713 UTC [158] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.713 UTC [158] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.714 UTC [452] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.714 UTC [452] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.714 UTC [452] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.714 UTC [200] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.714 UTC [200] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.714 UTC [200] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.716 UTC [187] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.716 UTC [187] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.716 UTC [187] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.713 UTC [473] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.713 UTC [473] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.713 UTC [473] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.719 UTC [155] WARNING:  terminating connection because of crash of another server process
2021-07-05 00:35:39.719 UTC [155] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-07-05 00:35:39.719 UTC [155] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2021-07-05 00:35:39.749 UTC [42] LOG:  database system is shut down
2021-07-05 00:35:40.151 UTC [578] LOG:  database system was interrupted; last known up at 2021-07-05 00:34:39 UTC
2021-07-05 00:35:40.484 UTC [578] LOG:  invalid resource manager ID in primary checkpoint record
2021-07-05 00:35:40.484 UTC [578] PANIC:  could not locate a valid checkpoint record
2021-07-05 00:35:40.652 UTC [576] LOG:  startup process (PID 578) was terminated by signal 6: Aborted
2021-07-05 00:35:40.652 UTC [576] LOG:  aborting startup due to startup process failure
2021-07-05 00:35:40.653 UTC [576] LOG:  database system is shut down
2021-07-05 00:35:42.038 UTC [581] LOG:  database system was interrupted; last known up at 2021-07-05 00:34:39 UTC
2021-07-05 00:35:42.359 UTC [581] LOG:  invalid resource manager ID in primary checkpoint record
2021-07-05 00:35:42.359 UTC [581] PANIC:  could not locate a valid checkpoint record
2021-07-05 00:35:42.517 UTC [579] LOG:  startup process (PID 581) was terminated by signal 6: Aborted
2021-07-05 00:35:42.517 UTC [579] LOG:  aborting startup due to startup process failure
2021-07-05 00:35:42.518 UTC [579] LOG:  database system is shut down
2021-07-05 00:35:45.441 UTC [586] LOG:  database system was interrupted; last known up at 2021-07-05 00:34:39 UTC
2021-07-05 00:35:45.767 UTC [586] LOG:  invalid resource manager ID in primary checkpoint record
2021-07-05 00:35:45.767 UTC [586] PANIC:  could not locate a valid checkpoint record
2021-07-05 00:35:45.913 UTC [584] LOG:  startup process (PID 586) was terminated by signal 6: Aborted
2021-07-05 00:35:45.913 UTC [584] LOG:  aborting startup due to startup process failure
2021-07-05 00:35:45.915 UTC [584] LOG:  database system is shut down
2021-07-05 00:35:49.375 UTC [589] LOG:  database system was interrupted; last known up at 2021-07-05 00:34:39 UTC
2021-07-05 00:35:49.656 UTC [589] LOG:  invalid resource manager ID in primary checkpoint record
2021-07-05 00:35:49.656 UTC [589] PANIC:  could not locate a valid checkpoint record
2021-07-05 00:35:49.818 UTC [587] LOG:  startup process (PID 589) was terminated by signal 6: Aborted
2021-07-05 00:35:49.818 UTC [587] LOG:  aborting startup due to startup process failure
2021-07-05 00:35:49.819 UTC [587] LOG:  database system is shut down
2021-07-05 00:35:54.488 UTC [592] LOG:  database system was interrupted; last known up at 2021-07-05 00:34:39 UTC
2021-07-05 00:35:54.775 UTC [592] LOG:  invalid resource manager ID in primary checkpoint record
2021-07-05 00:35:54.775 UTC [592] PANIC:  could not locate a valid checkpoint record
2021-07-05 00:35:54.894 UTC [590] LOG:  startup process (PID 592) was terminated by signal 6: Aborted
2021-07-05 00:35:54.894 UTC [590] LOG:  aborting startup due to startup process failure
2021-07-05 00:35:54.895 UTC [590] LOG:  database system is shut down

So I checked the pod status and found that there were two pods at that moment:

default   pmmserver-sample-b7dd8bccb-jzztt                    2/2     Running       0          7s
default   pmmserver-sample-b7dd8bccb-kzfcw                    2/2     Terminating   0          4m45s
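
My best guess at why: when a pod owned by a Deployment is deleted, the ReplicaSet creates a replacement immediately, while the old pod is still running out its termination grace period. For that short window both pods mount the same PVC, so two PostgreSQL postmasters operate on the same data directory, which would explain the "postmaster.pid contains wrong PID" line and the corruption that follows. A StatefulSet does not have this overlap, because it waits for the old pod to be fully gone before starting its replacement. Purely as a hypothetical sketch (not an officially supported way to run PMM Server, as the reply below confirms), the same pod template as a StatefulSet would look something like:

# Hypothetical sketch only; not an officially supported PMM Server setup.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pmmserver-sample
spec:
  serviceName: pmmserver-sample
  replicas: 1
  selector:
    matchLabels:
      app: pmmserver-sample
  template:
    metadata:
      labels:
        app: pmmserver-sample
    spec:
      containers:
        - name: pmm-server
          image: percona/pmm-server:2
          volumeMounts:
            - name: pmm-data
              mountPath: /srv
  volumeClaimTemplates:            # each replica gets its own PVC, and a deleted pod
    - metadata:                    # is fully terminated before its replacement starts
        name: pmm-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi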

Can the PMM Server be deployed on k8s as a Deployment object?

Right now pmm-server does not support deployment in Kubernetes, but we have this on the roadmap.
