Question: How can the service get an exposeType = LoadBalancer?
Regards
John
PS:
This may not be a great setup - what is needed is a service that makes sure requests are somehow load balanced. So my hope is that the existing Service does that:
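For reference, exposing the replica set members via a LoadBalancer is configured per replica set in the operator's cr.yaml. A minimal sketch, assuming the expose block quoted later in this thread plus the exposeType field from the operator's documentation:

replsets:
  - name: rs0
    expose:
      enabled: true
      exposeType: LoadBalancer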
mongodb-cluster-rs03 is a headless service. It does not have any load balancing behind it; it just creates endpoint slices and domain names. You can then use SRV records to connect to MongoDB within the k8s cluster:
# dig SRV my-cluster-name-rs0.default.svc.cluster.local
; <<>> DiG 9.18.1-1ubuntu1.2-Ubuntu <<>> SRV my-cluster-name-rs0.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3691
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 3
;; QUESTION SECTION:
;my-cluster-name-rs0.default.svc.cluster.local. IN SRV
;; ANSWER SECTION:
my-cluster-name-rs0.default.svc.cluster.local. 30 IN SRV 10 33 0 my-cluster-name-rs0-2.my-cluster-name-rs0.default.svc.cluster.local.
my-cluster-name-rs0.default.svc.cluster.local. 30 IN SRV 10 33 0 my-cluster-name-rs0-1.my-cluster-name-rs0.default.svc.cluster.local.
my-cluster-name-rs0.default.svc.cluster.local. 30 IN SRV 10 33 0 my-cluster-name-rs0-0.my-cluster-name-rs0.default.svc.cluster.local.
;; ADDITIONAL SECTION:
my-cluster-name-rs0-2.my-cluster-name-rs0.default.svc.cluster.local. 30 IN A 10.88.0.10
my-cluster-name-rs0-1.my-cluster-name-rs0.default.svc.cluster.local. 30 IN A 10.88.1.4
my-cluster-name-rs0-0.my-cluster-name-rs0.default.svc.cluster.local. 30 IN A 10.88.2.5
;; Query time: 1 msec
;; SERVER: 10.92.0.10#53(10.92.0.10) (UDP)
;; WHEN: Mon Nov 14 10:13:04 UTC 2022
;; MSG SIZE rcvd: 372
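The SRV records above mean the whole replica set is reachable through a single hostname. A minimal sketch of such a connection, with placeholder credentials, assuming the service port is named mongodb (the mongodb+srv scheme resolves it via a _mongodb._tcp SRV lookup):

# mongosh "mongodb+srv://databaseAdmin:password@my-cluster-name-rs0.default.svc.cluster.local/admin?replicaSet=rs0&ssl=false"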
If I use the headless service (or whatever method) to access the pods, it could be that a command lands on a pod where a write is not possible:
17:02:47,971 ERROR [stderr] (Thread-119) com.mongodb.MongoNotPrimaryException: Command failed with error 10107 (NotWritablePrimary): 'not master' on server mongodb01-cluster-rs.performance-mongodb03.svc.cluster.local:27017. The full response is {"topologyVersion": {"processId": {"$oid": "63a07b194865cc1078e16239"}, "counter": {"$numberLong": "3"}}, "operationTime": {"$timestamp": {"t": 1671465759, "i": 1}}, "ok": 0.0, "errmsg": "not master", "code": 10107, "codeName": "NotWritablePrimary", "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1671465759, "i": 1}}, "signature": {"hash": {"$binary": "GZfQ/d7kgm74uxk4yDkqvBoDHVg=", "$type": "00"}, "keyId": {"$numberLong": "7178873076023558148"}}}}
Any idea how to prevent this? (I am using the mongo-java-driver.)
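One likely cause (an assumption - the connection code is not shown in this thread) is pointing the driver at a single member, so it never learns which node is primary. Listing the members and setting replicaSet in the URI lets the driver track the topology and route writes to the current primary; the same URI string can be passed to MongoClients.create() in the Java driver. A sketch using the hostnames from the dig output above and placeholder credentials:

# mongosh "mongodb://databaseAdmin:password@my-cluster-name-rs0-0.my-cluster-name-rs0.default.svc.cluster.local:27017,my-cluster-name-rs0-1.my-cluster-name-rs0.default.svc.cluster.local:27017,my-cluster-name-rs0-2.my-cluster-name-rs0.default.svc.cluster.local:27017/admin?replicaSet=rs0"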
I was on a long vacation … in the meantime we have realized the following:
“General rule” - if the cluster runs as a StatefulSet, it is totally stable. The StatefulSet handles the load balancing and the detection of the primary/secondary nodes.
=> run with this setting
replsets:
  expose:
    enabled: false
=> everything else is NOT stable
the setup proposed by Sergey: does not work because the detection of the primary is missing
expose.enabled = true: does not work because after a restart of the MongoDB cluster, the configured Service IPs no longer match and the whole cluster is “kaput”. Not sure if this is homemade by Percona.
==> so @Roman_Tarasov … run it with expose.enabled = false: it works very stably, with much better performance than sharded and no collection limits due to listDatabases. The big con: MongoDB cannot be accessed with tools like MongoDB Compass outside the cloud env - obviously port-forwarding cannot be done due to the missing service(s).
K8SPSMDB-867: The Operator now configures replset members using local fully qualified domain names (FQDNs), resolvable and available only from inside the cluster, instead of using IP addresses; the old behavior can be restored by setting the clusterServiceDNSMode option to External.
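In cr.yaml this looks roughly as follows - a sketch, assuming the field sits at the top level of spec as in current operator versions (check the reference for your exact version):

spec:
  clusterServiceDNSMode: Internal   # the new FQDN-based default; set to External to restore the old IP-based behavior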
Could you please let me know if it solves the problem for you?
When I use --server-side, I get the error below. How can I fix that?
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using apps/v1: .spec.template.spec.containers[name="percona-server-mongodb-operator"].env[name="POD_NAME"].valueFrom.fieldRef
Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply command with the --force-conflicts flag.
* If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers.
* You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration).
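Since the conflicting field was previously set by a client-side kubectl apply, taking ownership with --force-conflicts is the usual fix when the deployment should now be managed server-side. A sketch, with the manifest path as a placeholder:

# kubectl apply --server-side --force-conflicts -f deploy/operator.yaml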
We aren't able to do a mongorestore; it gives the error [couldn't connect to server]. We have the Percona server in the same K8s cluster and the same namespace. We can ping the Percona server from another pod and are able to take the dump.
We have exposed the server with type LoadBalancer (rs0 [ClusterIP], rs0-0 [LoadBalancer], rs0-1 [LoadBalancer], rs0-2 [LoadBalancer]).
@Bakhtawar_Ali this topic has been solved and closed.
I would appreciate it if you created a new topic in the forum and shared more details - versions, your manifests, etc.