Percona HA solution with PostgreSQL running in Docker

We are exploring migrating our application to Percona PostgreSQL because of its high-availability solution.

We currently have the following requirements:

  • Our existing solution uses TimescaleDB and PostGIS.
  • We need to be able to roll out clustered environments quickly, with automation as an integral part of the solution.
  • We run the timescaledb-tune command on initial startup to configure memory, background workers, etc.
  • We currently use a Docker image for local development and Testcontainers for integration tests. It would be good to be able to run our new production database using the same Docker image.
  • On initial database startup we run a script that enables all the necessary extensions, such as timescaledb, pg_stat_statements, and postgis, and sets database default privileges, etc. (a sketch covering this script and the tuning step follows this list).
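For context, that first-start script looks roughly like the sketch below. It assumes the stock postgres image's /docker-entrypoint-initdb.d/ mechanism; the script path, the app_reader role, and the POSTGRES_USER/POSTGRES_DB variables are placeholders or standard image conventions, not our exact setup.

```bash
#!/bin/bash
# Sketch of a first-start init script (e.g. /docker-entrypoint-initdb.d/01-init.sh).
set -e

# Tune postgresql.conf for this machine; --yes accepts all suggestions
# non-interactively. This also ensures shared_preload_libraries includes
# timescaledb, which the server must load before CREATE EXTENSION works.
timescaledb-tune --yes --quiet --conf-path="$PGDATA/postgresql.conf"

# Enable the extensions and set default privileges.
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" <<'SQL'
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS postgis;
-- Placeholder default-privileges example; assumes role app_reader exists.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO app_reader;
SQL
```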

I have been playing around with Percona and created a custom Docker image that installs TimescaleDB and PostGIS alongside Percona Server for PostgreSQL and some of the extensions, but not the components for the HA solution (i.e. Patroni, HAProxy, etcd, etc.).
The container is then started using the config files that timescaledb-tune has updated.
I was hoping that I could install the HA components on the host and connect them to the Docker container running on the host network, but I’ve noticed that the percona-patroni package initialises a new database when it is installed.
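For context, the container is started along these lines. This is only a sketch: the image name and host paths are placeholders, and it assumes the image forwards command-line arguments to postgres the way the official postgres image does.

```bash
# Run the custom image on the host network so HA components installed
# on the host can reach it on localhost. Image name and paths are
# placeholders.
docker run -d --name percona-pg \
  --network host \
  -v /srv/pgdata:/var/lib/postgresql/data \
  -v /srv/pgconf:/etc/postgresql \
  my-percona-timescale-postgis:16 \
  -c config_file=/etc/postgresql/postgresql.conf
```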

Is there a way to have percona-patroni skip initialising a new database, so that I can modify the Patroni config to point at the PostgreSQL process already running in Docker?

Or should I try installing Patroni independently of Percona?
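For reference, this is a minimal sketch of the kind of patroni.yml I had in mind; the scope, node name, addresses, credentials, and paths are all placeholders. One caveat I'm aware of: Patroni normally expects to start and stop the postmaster itself, so having it adopt a server launched by a container entrypoint may need extra care.

```yaml
scope: ppg-cluster                    # placeholder cluster name
name: node1                           # placeholder node name

restapi:
  listen: 0.0.0.0:8008
  connect_address: 192.0.2.10:8008    # placeholder host address

etcd3:
  hosts: 192.0.2.10:2379              # placeholder etcd endpoint

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.0.2.10:5432
  # Existing data directory (the Docker volume, in my case). If it already
  # contains a valid cluster, Patroni adopts it instead of running initdb.
  data_dir: /srv/pgdata
  bin_dir: /usr/pgsql-16/bin          # placeholder; wherever the binaries live
  authentication:
    superuser:
      username: postgres
      password: change-me             # placeholder
    replication:
      username: replicator            # placeholder
      password: change-me             # placeholder
```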

I have created an environment in Docker similar to what you are describing. Although I do not use or implement the PostGIS extension, it should be easy to add with the Dockerfile and entrypoint script.

It runs Patroni with etcd and a few other packages. I would suggest you look at my repo here: GitHub - jtorral/DockerPgHa: Multi node multi dc simulation docker deploy of postgres, patroni, etcd and pgbackrest, and look at the Dockerfiles and entrypoint scripts for each package. It may give you the info you need.

Additionally, this generates a docker-compose file on the fly. It is still a work in progress, but it does work. For your case, when generating the docker-compose file, I would simulate only 1 data center with -d 1 and maybe 3 Patroni nodes with -c 3; something like the invocation sketched below. Read the TLDR; it's pretty straightforward and may offer you some ideas.
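As a sketch only: the generator script's actual name and full set of flags are in the repo's TLDR, so the script name below is a placeholder.

```bash
# Placeholder script name; see the repo's TLDR for the real invocation.
./generate-compose.sh -d 1 -c 3   # 1 simulated data center, 3 Patroni nodes
docker compose up -d              # bring up the generated stack
```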

Thanks Jorge, I took a bit of a look at it and it looks pretty impressive!

We’ve decided to go down a different path for our production deployment and not use Docker, but this might be useful for testing in our local dev environments.