How to access Percona Operator for MongoDB in an AKS cluster externally via an internal load balancer

Hi Team,

We have deployed Percona Operator for MongoDB with `exposeType: ClusterIP` in our AKS cluster. I created another Service with an internal Azure load balancer, without modifying the existing Percona-managed headless Service, and I am able to connect with percona-mongo-client from another pod within the cluster.
Note: I have added routes so the cluster can be reached via VPN.
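For reference, the extra Service looks roughly like this (the name, namespace, and selector labels are assumptions based on my setup and may differ from yours):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: percona-mongodb-internal-lb   # assumed name
  namespace: mongodb
  annotations:
    # Azure-specific annotation that makes AKS provision an internal LB
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    # assumed pod labels; check the labels on your mongod pods
    app.kubernetes.io/instance: percona-mongodb-db-ps
    app.kubernetes.io/replset: rs
  ports:
    - name: mongodb
      port: 27017
      targetPort: 27017
```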

Error:
Now I need to connect from outside of AKS, but I am having issues connecting to MongoDB:
Starting new replica set monitor for rs/xx.xx.xx.xx:27017
2023-03-23T17:54:44.432+0530 I NETWORK [thread1] Successfully connected to 20.223.78.223:27017 (1 connections now open to xx.xx.xx.xx:27017 with a 5 second timeout)
2023-03-23T17:54:44.874+0530 I NETWORK [thread1] getaddrinfo("percona-mongodb-db-ps-rs-1.percona-mongodb-db-ps-rs.mongodb.svc.cluster.local") failed: Name or service not known
2023-03-23T17:54:45.399+0530 I NETWORK [thread1] getaddrinfo("percona-mongodb-db-ps-rs-2.percona-mongodb-db-ps-rs.mongodb.svc.cluster.local") failed: Name or service not known
2023-03-23T17:54:45.399+0530 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] getaddrinfo("percona-mongodb-db-ps-rs-0.percona-mongodb-db-ps-rs.mongodb.svc.cluster.local") failed: Name or service not known
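As far as I understand, the replica set members advertise their in-cluster DNS names (`*.svc.cluster.local`), which are not resolvable from outside AKS, so replica-set discovery fails even though the initial TCP connection succeeds. As a quick check, connecting to a single member without the `replicaSet` parameter skips discovery (placeholder credentials; the IP is the load balancer address):

```
mongo "mongodb://databaseAdmin:<password>@xx.xx.xx.xx:27017/admin"
```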

How do I resolve this issue?

Hello @Prasanths ,

quick question: why don’t you just configure the load balancer through a custom resource and let the Operator do the heavy lifting?

For MongoDB it is not only about k8s Service objects, but also about how the mongo replica sets themselves are configured: TLS, primary/secondaries, etc. The Operator configures mongod depending on how you configure expose (this also includes TLS and other settings). This matters especially when scaling.

So my strong suggestion would be to avoid manual configuration and configure exposure through the Custom Resource.
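A sketch of what that could look like in the cr.yaml, with an internal Azure load balancer (the exact field names follow the PSMDB operator's Custom Resource and may vary between operator versions, so please verify against the docs for your version):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: percona-mongodb-db-ps
spec:
  replsets:
    - name: rs
      size: 3
      expose:
        enabled: true
        exposeType: LoadBalancer
        serviceAnnotations:
          # instructs AKS to create an internal (private) load balancer
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```

With this in place the Operator creates and owns the per-pod LoadBalancer Services and adjusts the mongod replica set configuration to match.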

Hi @Sergey_Pronin, yes, I have reconfigured it with an internal load balancer as shown in the image below.
[image]

I was able to connect on Friday evening after the changes, but today the pods are going into CrashLoopBackOff with the following error: