Percona MongoDB instances don't rejoin the replica set after downtime

Description:

We run Percona Server for MongoDB in Kubernetes using the Operator.
After an unexpected full node outage, the pods come back up and are running, but they do not rejoin the replica set and sit in a ghost state.
Does Percona have a solution for this, or is there a manual procedure to fix it? Please help.

Steps to Reproduce:

Shut down a Kubernetes node, or force-delete all of the replica set's pods with kubectl delete --force, for example:
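
A minimal sketch of the force-delete variant, assuming the operator's default pod labels and the cluster/replica set names used later in this thread (my-cluster-name, rs0); check the real labels with kubectl get pods --show-labels first:

# Force-delete every pod of the replica set to simulate sudden loss of all members.
kubectl delete pod --force --grace-period=0 \
  -l app.kubernetes.io/instance=my-cluster-name,app.kubernetes.io/replset=rs0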

Version:

I’m using Operator 1.14.0, MongoDB server 6.0.4-3

Logs:

When I exec into a pod and run rs.status(), the output is: MongoServerError: Our replica set config is invalid or we are not a member of it
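
For completeness, a sketch of the same check as a one-liner, assuming a pod named my-cluster-name-rs0-0 and the clusterAdmin credentials shown later in this thread (adjust the pod name, container, and credentials to your deployment):

# Run rs.status() inside the first replica set pod (add -c mongod if kubectl
# does not pick the mongod container by default).
kubectl exec -it my-cluster-name-rs0-0 -- \
  mongosh --quiet -u clusterAdmin -p clusterAdmin123456 \
  --authenticationDatabase admin --eval 'rs.status()'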

Expected Result:

An automated or manual solution while waiting for a fix on the roadmap.

Actual Result:

The pods are running but never rejoin the replica set; rs.status() returns the error shown above.

Additional Information:

Hello @dung-tien-nguyen,

I was not able to reproduce it with the same versions. I killed the Pods multiple times, and I can still connect and the cluster stays healthy.

It is also important to connect to the replica set in the right way, for example:

mongosh "mongodb+srv://clusterAdmin:clusterAdmin123456@my-cluster-name-rs0.default.svc.cluster.local/admin?replicaSet=rs0&ssl=false"

What do the psmdb object and the Pods show?

kubectl get pods
kubectl get psmdb
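
If the psmdb object is not ready, its status usually says why. A hedged way to dig deeper, assuming the cluster is named my-cluster-name as in the connection string above:

# Show the operator's view of the cluster, including status conditions and
# any error messages it has recorded.
kubectl describe psmdb my-cluster-name
kubectl get psmdb my-cluster-name -o yaml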

This error occurs when clusterServiceDNSMode=External is set and the replica set is exposed through a NodePort or LoadBalancer service.
With clusterServiceDNSMode=Internal the problem does not occur. @Sergey_Pronin
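
For reference, a sketch of the custom resource fragment being described, with field names taken from the operator 1.14 example cr.yaml (the cluster and replica set names reuse those from earlier in this thread; treat this as illustrative only and verify against your own manifest):

# Fragment of a PerconaServerMongoDB resource combining external DNS mode with
# LoadBalancer exposure, i.e. the combination that triggers the issue above.
cat <<'EOF' > psmdb-expose-fragment.yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  clusterServiceDNSMode: External
  replsets:
  - name: rs0
    size: 3
    expose:
      enabled: true
      exposeType: LoadBalancer
EOF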

@dung-tien-nguyen you are right, it does not work that way.
The next release of the operator (coming in September) will include a split horizon feature, so you can keep clusterServiceDNSMode=Internal and still use load balancers. This should address it.
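
For readers unfamiliar with the feature: MongoDB implements split horizon through per-member horizons in the replica set configuration, so a member can advertise an external hostname to clients coming in through a load balancer while keeping its internal hostname for in-cluster traffic. A sketch of how to inspect this, reusing the placeholder pod name and credentials from above (horizons will simply be absent until the feature is configured):

# Print each member's host and any configured horizons from the replica set config.
kubectl exec -it my-cluster-name-rs0-0 -- \
  mongosh --quiet -u clusterAdmin -p clusterAdmin123456 \
  --authenticationDatabase admin \
  --eval 'rs.conf().members.map(m => ({ host: m.host, horizons: m.horizons }))'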

But when clusterServiceDNSMode=Internal is set and MongoDB is accessed from outside the Kubernetes cluster, users can only connect to individual MongoDB instances, right?
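
For illustration, a direct connection to a single exposed member (rather than to the replica set) would look roughly like this; the host and port are placeholders for whatever NodePort or LoadBalancer address is reachable from outside, and the client gets no automatic failover this way:

# Connect to one member only; directConnection=true disables replica set discovery.
mongosh "mongodb://clusterAdmin:clusterAdmin123456@<external-host>:<external-port>/admin?directConnection=true&ssl=false"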