FATAL: SSL required when connecting to Percona Everest postgres engine

Hi All,

Has anyone faced this error when connecting to PostgreSQL after creating it with Percona Everest?

psql -h 10.28.130.243 -U postgres
psql: error: connection to server at "10.28.130.243", port 5432 failed: FATAL: server login has been failing, try again later (server_login_retry)
connection to server at "10.28.130.243", port 5432 failed: FATAL: SSL required

I have already configured external access to allow 0.0.0.0/0.

Do I have to add any database engine parameters while launching the DB cluster?
Thanks

Hi, I just created a new cluster in GKE for a test, installed Percona Everest, and created a PG cluster (External Access enabled for 0.0.0.0/0).

I installed psql using Homebrew on my Mac:

brew install libpq
brew link --force libpq
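
To confirm the client is linked, something like:

psql --version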

Tried to connect and it was successful.

Perhaps some policies or settings on your k8s cluster are getting in the way.

Try connecting from inside the cluster:

using a pod with Percona Distribution (for example, something like the sketch below),

or using pgAdmin.
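
A throwaway client pod along these lines should work (the PG service host and password are placeholders):

kubectl run -i --rm --tty pg-client --image=perconalab/percona-distribution-postgresql:16 --restart=Never -- psql "host=<PG_SERVICE> port=5432 user=postgres password='<YOUR_PASSWORD>' dbname=postgres"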

Hi @daniil.bazhenov,
I installed pgAdmin 4 and tried to connect to the DB through the PgBouncer service by its cluster IP, but it still shows the SSL error.


Can you show me the database engine parameters of the Postgres DB you created earlier? I am afraid I'm missing something by leaving them empty.

Thanks

Hi, I haven't applied any special settings or configurations. What cloud or environment are you using?

Look at the connection settings (the Advanced section of the pgAdmin connection dialog); there may be an option to turn off SSL.

I also connected my Golang app to Percona Everest PG today.

I was getting an error message about SSL until I removed the sslmode=disable parameter.
Before:
dsn := "host=localhost user=gorm password=gorm dbname=gorm port=5432 sslmode=disable"
After:
dsn := "host=localhost user=gorm password=gorm dbname=gorm port=5432"

Hi @daniil.bazhenov,
I am currently testing in my private environment.
I deployed Percona Everest in an RKE2 cluster.
In this environment, all outbound traffic to the internet must go through a proxy server.
I have already configured the proxy environment variables on the nodes and in containerd, and even in the Everest subscription.
I was able to provision and connect to MongoDB and MySQL, but I got this issue with Postgres :smiley:
As you can see, I tried connecting with several sslmode values but had no luck, although I can use nc to confirm that port 5432 is open and working.
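
For reference, the checks looked roughly like this:

nc -vz 10.28.130.243 5432
psql "host=10.28.130.243 port=5432 user=postgres sslmode=disable"
psql "host=10.28.130.243 port=5432 user=postgres sslmode=require"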

Also, I want to know where these configs should go in the k8s cluster:


Are they applied to one of these ConfigMaps?

Try the second option from your screenshot, but specify the password in single quotes (') and do not specify sslmode at all:

psql "user=postgres password='YOUR_PASSWORD' dbname=postgres host=HOST_IP port=5432"

Hi @daniil.bazhenov,
I hadn't put the password in single quotes :smiley:


Hi @daniil.bazhenov,
I believe I have found something.
I created a pod running the image perconalab/percona-distribution-postgresql:13.2,
then accessed the pod and tried to psql to my Postgres DB.
I used the PgBouncer service as the PG host and it failed:


But then I switched the host to the PG primary service, and I was able to access my DB.

I think there's something wrong with the PgBouncer service, or I'm missing some configuration on it.
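
Roughly, the two attempts looked like this (service names are placeholders following the <cluster>-pgbouncer / <cluster>-primary pattern; the first fails with the SSL error, the second connects):

psql "host=pg-test-pgbouncer.app.svc port=5432 user=postgres dbname=postgres"
psql "host=pg-test-primary.app.svc port=5432 user=postgres dbname=postgres"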

Try an image with PG version 16, not 13.2:

kubectl run -i --rm --tty pg-client --image=perconalab/percona-distribution-postgresql:16 --restart=Never -- psql $PGBOUNCER_URI

Same result.


Is the password still the same if I access through PgBouncer?

Ah, I found some interesting logs when I tried to log in using the PgBouncer address:


It looks like PgBouncer cannot find the PG primary service due to a DNS lookup failure.
Have we had this issue before?
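
A quick way to double-check the DNS angle from inside the cluster (the service FQDN here is a placeholder for mine):

kubectl run -i --rm --tty dns-test --image=busybox:1.36 --restart=Never -n app -- nslookup pg-test-primary.app.svc.cluster.local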

I tried it now, and this command worked:

kubectl run -i --rm --tty pg-client --image=perconalab/percona-distribution-postgresql:16 -n app --restart=Never -- psql "host=pg-test-pgbouncer.app.svc port=5432 user=postgres password='YOUR_PASSWORD' dbname=postgres"


I don't think this issue is related to the command anymore :smiley: I think the PgBouncer service couldn't resolve the address of the pg-primary service, as shown in the logs I captured in my last comment.

I don’t know, I need someone smarter.

Maybe @Diogo_Recharte.

After taking a closer look, it seems you've bumped into an old issue that impacts our PG operator.
This forum post has a more detailed explanation of the issue and a couple of workarounds that worked for some users. However, I think you may have limited success trying these workarounds with Everest.

We have escalated this issue to our cloud team, which will investigate further and try to land a fix in one of the upcoming PG operator releases.

We’ll report back when an eventual fix is part of Everest.


@duytq02, during this thread you asked another question that's unrelated to the issue, but I don't want it to go unanswered.
You asked whether the DB engine parameters you specify in the advanced settings of the Everest UI get stored in a ConfigMap somewhere. They do not; they exist as part of the DatabaseCluster CR's .spec.engine.config field, which gets reconciled into the PerconaPGCluster CR's .spec.patroni.dynamicConfiguration.postgresql.parameters (see here).
You can check this field directly with the following command:

kubectl get pg/<YourDBName> -n <YourDBNamespace> -o jsonpath='{.spec.patroni.dynamicConfiguration.postgresql.parameters}'
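
The Everest-side value can be read from the DatabaseCluster CR in the same way (assuming the databasecluster resource name exposed by Everest; placeholders as above):

kubectl get databasecluster/<YourDBName> -n <YourDBNamespace> -o jsonpath='{.spec.engine.config}'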

Thanks @Diogo_Recharte for the information.
I modified the PerconaPGCluster to expose the pg-ha service, and now I can access my DB through the LoadBalancer IP of the pg-ha service instead of the PgBouncer service.
Anyway, how can I completely disable the PgBouncer service? I don't want to waste a LoadBalancer on a non-working service.
Can we do that by adding config to the DB Engine Parameters while creating the DB in the UI?
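
For reference, the PgBouncer section of the spec can be inspected with something like this before editing (assuming the CR keeps it under .spec.proxy.pgBouncer; verify the path against your operator version):

kubectl get pg/<YourDBName> -n <YourDBNamespace> -o jsonpath='{.spec.proxy.pgBouncer}'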

This issue is also affecting us, and it is also seen with pgAdmin.