I created a managed MongoDB cluster (mongo-rs) using the Helm chart with Terraform. To migrate the cluster from the one on premises, I changed it to an unmanaged cluster and scaled the StatefulSets down to 1. When I checked, the cluster status had changed to error:
$ kubectl get psmdb
NAME      ENDPOINT                               STATUS   AGE
mongo-1   mongo-1-rs.mongo-1.svc.cluster.local   error    6h39m
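For context, this is roughly the change in the values we pass to the psmdb-db Helm chart from Terraform (a sketch only; the chart value keys and layout are from my setup and may differ between chart versions):

# Values passed to the Percona psmdb-db chart via Terraform's helm_release.
# Sketch only -- exact keys vary between chart versions.
unmanaged: true   # the operator stops managing the replset configuration
replsets:
  - name: rs      # matches the "replset rs" in the operator log
    size: 1       # scaled down from the original size for the migration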
The operator log says: replset rs needs to be exposed if cluster is unmanaged
I am OK with that because the service is exposed through the Istio ingress gateway. Can anyone confirm whether it impacts cluster functionality, specifically the migration, if I leave it as-is?
As I understand it, you mean a migration from a MongoDB cluster running on-prem to MongoDB running in k8s.
To perform such a migration, all replica set nodes must be able to talk to each other, i.e., form a full mesh. This is why we require exposing the replica set nodes (including the config server replica set).
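For example, in the PerconaServerMongoDB Custom Resource the relevant part is the expose section of each replica set. A sketch only (the field is named exposeType in older operator versions and type in newer ones):

# With expose enabled, the operator creates a Service per member pod,
# so on-prem nodes and the k8s members can all reach each other.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: mongo-1
spec:
  unmanaged: true
  replsets:
    - name: rs
      size: 1
      expose:
        enabled: true
        exposeType: ClusterIP   # or LoadBalancer if traffic must come from outside the cluster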
I’m not sure about the setup that you have and how the nodes are exposed. Could you please share a diagram that explains how the nodes can reach each other?
Thanks, Spronin
We managed to provision the unmanaged cluster. The reason it failed was that the cluster was not exposed.
By design, we don't expose the cluster in the Helm chart, because an Istio ingress gateway handles the incoming requests.
With an unmanaged cluster, I had to expose the cluster, with type ClusterIP in my case.
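For anyone hitting the same error, the fix on our side was roughly this addition to the chart values (again a sketch; key names may differ between chart and operator versions):

unmanaged: true
replsets:
  - name: rs
    size: 1
    expose:
      enabled: true           # required by the operator for unmanaged clusters
      exposeType: ClusterIP   # ClusterIP is enough here; Istio handles external traffic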