We are exploring migrating our application to Percona PostgreSQL because of its high-availability solution.
We currently have the following requirements:
- Our existing solution uses TimescaleDB and PostGIS.
- We need to be able to roll out clustered environments quickly with automation being an integral part of the solution.
- We run the timescaledb-tune command on initial startup to configure memory, worker threads, etc.
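For reference, the invocation looks roughly like this (the flags, path, and resource limits are from my local setup and will differ elsewhere):

```shell
# Tune postgresql.conf in place based on the resources we give the container
# (conf path, memory, and CPU values here are placeholders for our setup)
timescaledb-tune --yes --quiet \
  --conf-path /var/lib/postgresql/data/postgresql.conf \
  --memory 8GB --cpus 4
```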
- We currently use a Docker image for local development and Testcontainers for integration tests. Ideally we would run our new production database using the same Docker image.
- On initial database startup we have a script that enables all the necessary extensions (timescaledb, pg_stat_statements, postgis), sets database default privileges, etc.
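As a sketch, that init script boils down to a few SQL statements run on first startup (e.g. from /docker-entrypoint-initdb.d/); the role name below is a placeholder for our actual application roles:

```sql
-- Run once on first database startup.
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- app_rw is a placeholder for our real application role.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_rw;
```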
I have been playing around with Percona and created a custom Docker image that installs TimescaleDB, PostGIS, the Percona server, and some of the extensions, but not the HA components (i.e. Patroni, HAProxy, etcd, etc.).
The Docker container is then started with the config files that timescaledb-tune has updated.
I was hoping I could install the HA components on the host and connect them to the Docker container running on the host network, but I've noticed that the percona-patroni package initialises a new database when it is installed.
Is there a way to have percona-patroni skip initialising a new database, so that I can point the Patroni config at the PostgreSQL process already running in Docker?
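In other words, I'd like to hand Patroni a config along these lines, pointing at the already-running container instead of letting it bootstrap a fresh cluster (the cluster name, hosts, ports, and credentials below are all placeholders, and the data directory would be the volume shared with the container):

```yaml
scope: my-cluster            # placeholder cluster name
name: node1

etcd3:
  hosts: 127.0.0.1:2379      # etcd running on the host

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 127.0.0.1:5432     # container on the host network
  data_dir: /var/lib/postgresql/data  # volume mounted into the container
  authentication:
    superuser:
      username: postgres
      password: secret       # placeholder
```

I realise Patroni normally expects to start and stop the PostgreSQL process itself, so I'm not sure whether pointing it at a process it doesn't manage is even workable.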
Or should I try installing Patroni independently of Percona?