Citus wants to be first in the shared_preload_libraries line in postgresql.conf,
and if it is not, PostgreSQL refuses to start with a message like this:
2023-10-09 14:06:16,761 INFO: starting as a secondary
2023-10-09 14:06:16.921 UTC  LOG: pgaudit extension initialized
2023-10-09 14:06:16,921 INFO: postmaster pid=354
2023-10-09 14:06:16.923 UTC  FATAL: Citus has to be loaded first
2023-10-09 14:06:16.923 UTC  HINT: Place citus at the beginning of shared_preload_libraries.
2023-10-09 14:06:16.923 UTC  LOG: database system is shut down
Steps to reproduce:
probably it is enough to just add citus to your cr.yaml
The problem is that the operator writes its own extensions first in this line of the config.
In this case, the cluster ConfigMap contains the line:
but postgresql.conf contains:
bash-4.4$ grep 'citus' /pgdata/pg13/postgresql.conf
shared_preload_libraries = 'pg_stat_monitor,pgaudit,citus,pg_cron,pg_stat_monitor,pgaudit,pg_stat_statements'
How can this behavior be fixed?
Without checking with our in-house K8s experts, offhand I’d suggest rewriting the setting using ALTER SYSTEM …
Try it out and see if it works for you.
Thanks for the idea, it will help us get unblocked for now.
For those who may be running into this now, you can proceed like this:
1. Don’t mention citus in shared_preload_libraries in cr.yaml.
2. After startup, check what shared_preload_libraries now contains.
3. Run ALTER SYSTEM … with citus in the first position in the list.
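Steps 2 and 3 could be sketched like this. The extension list below is just an illustration based on the grep output above; check what your own cluster actually has before rewriting it:

```sql
-- Step 2: see what the operator generated
SHOW shared_preload_libraries;

-- Step 3: rewrite the list with citus first, keeping the rest
-- in their existing order (illustrative list, adjust to your output)
ALTER SYSTEM SET shared_preload_libraries =
  'citus,pg_stat_monitor,pgaudit,pg_cron,pg_stat_statements';
```

Note that ALTER SYSTEM writes to postgresql.auto.conf, which overrides postgresql.conf, and that changing shared_preload_libraries requires a full restart of PostgreSQL, not just a reload.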
There are two points here:
currently the operator puts “its own” entries first in shared_preload_libraries (pg_stat_monitor, pgaudit), although it doesn’t need to, and it would probably be better to change this behavior: put what is specified in the cr.yaml manifest first, and the operator’s own entries after it.
there is still hope for Patroni: starting with version 3 (the version shipped in PGO v2.2), it knows about Citus and lets you make the appropriate settings, and then Patroni itself will put citus first in shared_preload_libraries. Beyond that, it opens up some new possibilities for Patroni and Citus working together.
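For reference, Patroni 3.x enables its Citus integration through a `citus` section in the Patroni configuration. A minimal sketch follows; the database name and group number are assumptions to adjust for your own setup:

```yaml
# patroni.yml (fragment) -- Patroni 3.x Citus support
citus:
  database: citus   # database Citus should manage (assumed name)
  group: 0          # 0 = coordinator; worker nodes use group >= 1
```

According to the Patroni documentation, with this section present Patroni takes care of adding citus at the front of shared_preload_libraries itself.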
We’re going to try the latter.
Following up: I checked with our team, and an internal JIRA ticket has been created tracking this issue as a bug.
there is still hope for Patroni …
This sounds good. Keep me in the loop and tell me how it goes on your end of things.
Okay - I’ll let you know
On the one hand, Patroni could know nothing about the Citus cluster and only manage its own standbys.
In that case, the Citus cluster can be assembled roughly as the manual describes for individual database servers.
But the author of Patroni made sure that Patroni is aware of the Citus cluster, so it can better show the state of the Citus cluster as a whole and handle some maintenance tasks that would otherwise have to be done manually, such as adding a node to the cluster.
It’s described here:
There is a concern about whether, without the help of Percona specialists, we will be able to implement what is described in the Citus on Kubernetes section.
But we are moving forward :) – join us.
If an operator could deploy and manage a Citus cluster, providing horizontal scaling, that would be really cool.
It’s unlikely that any operator can do this today :)