MongoDB exposed via Internal NLB causing sporadic connection timeouts

Description:

We have deployed two MongoDB instances to our Kubernetes cluster:

1. Exposed via a public NLB with clusterServiceDNSMode: "External" (WORKING)
2. Exposed via an Internal NLB (NOT WORKING)

We are experiencing sporadic timeouts when attempting to connect to the second instance (Internal NLB).

Steps to Reproduce:

  1. Config of the first MongoDB instance (public NLB)
# Default values for psmdb-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Platform type: kubernetes, openshift
# platform: kubernetes

# Cluster DNS Suffix
# clusterServiceDNSSuffix: svc.cluster.local
clusterServiceDNSMode: "External"

finalizers:
## Set this if you want that operator deletes the primary pod last
  - delete-psmdb-pods-in-order
## Set this if you want to delete database persistent volumes on cluster deletion
#  - delete-psmdb-pvc

nameOverride: ""
fullnameOverride: ""

crVersion: 1.14.0
pause: false
unmanaged: false
allowUnsafeConfigurations: true
# ignoreAnnotations:
#   - service.beta.kubernetes.io/aws-load-balancer-backend-protocol
# ignoreLabels:
#   - rack
multiCluster:
  enabled: false
  # DNSSuffix: svc.clusterset.local
updateStrategy: SmartUpdate
upgradeOptions:
  versionServiceEndpoint: https://check.percona.com
  apply: disabled
  schedule: "0 2 * * *"
  setFCV: false

image:
  repository: percona/percona-server-mongodb
  tag: 6.0.4-3

imagePullPolicy: Always
# imagePullSecrets: []
# initImage:
#   repository: percona/percona-server-mongodb-operator
#   tag: 1.14.0
# initContainerSecurityContext: {}
# tls:
#   # 90 days in hours
#   certValidityDuration: 2160h
secrets: {}
  # If you set users secret here the operator will use existing one or generate random values
  # If not set the operator generates the default secret with name <cluster_name>-secrets
  # users: my-cluster-name-secrets
  # encryptionKey: my-cluster-name-mongodb-encryption-key

pmm:
  enabled: false
  image:
    repository: percona/pmm-client
    tag: 2.35.0
  serverHost: monitoring-service

replsets:
  - name: rs0
    size: 1
    # externalNodes:
    # - host: 34.124.76.90
    # - host: 34.124.76.91
    #   port: 27017
    #   votes: 0
    #   priority: 0
    # - host: 34.124.76.92
    # configuration: |
    #   operationProfiling:
    #     mode: slowOp
    #   systemLog:
    #     verbosity: 1
    antiAffinityTopologyKey: "kubernetes.io/hostname"
    # tolerations: []
    # priorityClass: ""
    # annotations: {}
    # labels: {}
    # nodeSelector:
    # livenessProbe:
    #   failureThreshold: 4
    #   initialDelaySeconds: 60
    #   periodSeconds: 30
    #   timeoutSeconds: 10
    #   startupDelaySeconds: 7200
    # readinessProbe:
    #   failureThreshold: 8
    #   initialDelaySeconds: 10
    #   periodSeconds: 3
    #   successThreshold: 1
    #   timeoutSeconds: 2
    # runtimeClassName: image-rc
    # storage:
    #   engine: wiredTiger
    #   wiredTiger:
    #     engineConfig:
    #       cacheSizeRatio: 0.5
    #       directoryForIndexes: false
    #       journalCompressor: snappy
    #     collectionConfig:
    #       blockCompressor: snappy
    #     indexConfig:
    #       prefixCompression: true
    #   inMemory:
    #     engineConfig:
    #        inMemorySizeRatio: 0.5
    sidecars:
    - image: percona/mongodb_exporter:0.36
      env:
      - name: EXPORTER_USER
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_USER
      - name: EXPORTER_PASS
        valueFrom:
          secretKeyRef:
            name: psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_PASSWORD
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: MONGODB_URI
        value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
      args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--mongodb.uri=$(MONGODB_URI)"]
      name: metrics
    #   volumeMounts:
    #     - mountPath: /volume1
    #       name: sidecar-volume-claim
    #     - mountPath: /secret
    #       name: sidecar-secret
    #     - mountPath: /configmap
    #       name: sidecar-config
    # sidecarVolumes:
    # - name: sidecar-secret
    #   secret:
    #     secretName: mysecret
    # - name: sidecar-config
    #   configMap:
    #     name: myconfigmap
    # sidecarPVCs:
    # - apiVersion: v1
    #   kind: PersistentVolumeClaim
    #   metadata:
    #     name: sidecar-volume-claim
    #   spec:
    #     resources:
    #       requests:
    #         storage: 1Gi
    #     volumeMode: Filesystem
    #     accessModes:
    #       - ReadWriteOnce
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: LoadBalancer
      # loadBalancerSourceRanges:
      #   - 10.0.0.0/8
      serviceAnnotations:
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      # serviceLabels: 
      #   some-label: some-key
    nonvoting:
      enabled: false
      # podSecurityContext: {}
      # containerSecurityContext: {}
      size: 3
      # configuration: |
      #   operationProfiling:
      #     mode: slowOp
      #   systemLog:
      #     verbosity: 1
      antiAffinityTopologyKey: "kubernetes.io/hostname"
      # tolerations: []
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      # nodeSelector: {}
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        # emptyDir: {}
        # hostPath:
        #   path: /data
        pvc:
          # annotations:
          #   volume.beta.kubernetes.io/storage-class: example-hostpath
          # labels:
          #   rack: rack-22
          # storageClassName: standard
          # accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 3Gi
    arbiter:
      enabled: false
      size: 1
      antiAffinityTopologyKey: "kubernetes.io/hostname"
      # tolerations: []
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      # nodeSelector: {}
    # schedulerName: ""
    # resources:
    #   limits:
    #     cpu: "300m"
    #     memory: "0.5G"
    #   requests:
    #     cpu: "300m"
    #     memory: "0.5G"
    volumeSpec:
      # emptyDir: {}
      # hostPath:
      #   path: /data
      pvc:
        # annotations:
        #   volume.beta.kubernetes.io/storage-class: example-hostpath
        # labels:
        #   rack: rack-22
        storageClassName: mongodb
        # accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 250Gi

sharding:
  enabled: false 

backup:
  enabled: false
  image:
    repository: percona/percona-backup-mongodb
    tag: 2.0.5
  serviceAccountName: percona-server-mongodb-operator
  #  annotations:
  #  iam.amazonaws.com/role: arn:aws:iam::700849607999:role/acme-test-default-eks-mongodb
  # resources:
  #   limits:
  #     cpu: "300m"
  #     memory: "0.5G"
  #   requests:
  #     cpu: "300m"
  #     memory: "0.5G"
  storages:
    s3-us-east:
      type: s3
      s3:
        bucket: bucket
        credentialsSecret: secret
        region: us-east-2
        prefix: ""
        uploadPartSize: 10485760
        maxUploadParts: 10000
        storageClass: STANDARD
        insecureSkipTLSVerify: false
    # minio:
    #   type: s3
    #   s3:
    #     bucket: MINIO-BACKUP-BUCKET-NAME-HERE
    #     region: us-east-1
    #     credentialsSecret: my-cluster-name-backup-minio
    #     endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
    #     prefix: ""
    #   azure-blob:
    #     type: azure
    #     azure:
    #       container: CONTAINER-NAME
    #       prefix: PREFIX-NAME
    #       credentialsSecret: SECRET-NAME
  pitr:
    enabled: false
    # oplogSpanMin: 10
    # compressionType: gzip
    # compressionLevel: 6
  tasks:
   - name: "daily-s3-backup"
     enabled: true
     schedule: "0 1 * * *"
     keep: 3
     type: logical
     storageName: s3-us-east

  # - name: daily-s3-us-west
  #   enabled: true
  #   schedule: "0 0 * * *"
  #   keep: 3
  #   storageName: s3-us-west
  #   compressionType: gzip
  # - name: weekly-s3-us-west
  #   enabled: false
  #   schedule: "0 0 * * 0"
  #   keep: 5
  #   storageName: s3-us-west
  #   compressionType: gzip
  # - name: weekly-s3-us-west-physical
  #   enabled: false
  #   schedule: "0 5 * * 0"
  #   keep: 5
  #   type: physical
  #   storageName: s3-us-west
  #   compressionType: gzip
  #   compressionLevel: 6

# If you set users here the secret will be constructed by helm with these values
# users:
#   MONGODB_BACKUP_USER: backup
#   MONGODB_BACKUP_PASSWORD: backup123456
#   MONGODB_DATABASE_ADMIN_USER: databaseAdmin
#   MONGODB_DATABASE_ADMIN_PASSWORD: databaseAdmin123456
#   MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
#   MONGODB_CLUSTER_ADMIN_PASSWORD: clusterAdmin123456
#   MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
#   MONGODB_CLUSTER_MONITOR_PASSWORD: clusterMonitor123456
#   MONGODB_USER_ADMIN_USER: userAdmin
#   MONGODB_USER_ADMIN_PASSWORD: userAdmin123456
#   PMM_SERVER_API_KEY: apikey
#   # PMM_SERVER_USER: admin
#   # PMM_SERVER_PASSWORD: admin
  2. Config of the second MongoDB instance (Internal NLB)
# Default values for psmdb-cluster.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Platform type: kubernetes, openshift
# platform: kubernetes

# Cluster DNS Suffix
# clusterServiceDNSSuffix: svc.cluster.local
# clusterServiceDNSMode: "Internal"

finalizers:
## Set this if you want that operator deletes the primary pod last
  - delete-psmdb-pods-in-order
## Set this if you want to delete database persistent volumes on cluster deletion
#  - delete-psmdb-pvc

nameOverride: ""
fullnameOverride: ""

crVersion: 1.14.0
pause: false
unmanaged: false
allowUnsafeConfigurations: true
# ignoreAnnotations:
#   - service.beta.kubernetes.io/aws-load-balancer-backend-protocol
# ignoreLabels:
#   - rack
multiCluster:
  enabled: false
  # DNSSuffix: svc.clusterset.local
updateStrategy: SmartUpdate
upgradeOptions:
  versionServiceEndpoint: https://check.percona.com
  apply: disabled
  schedule: "0 2 * * *"
  setFCV: false

image:
  repository: percona/percona-server-mongodb
  tag: 6.0.4-3

imagePullPolicy: Always
# imagePullSecrets: []
# initImage:
#   repository: percona/percona-server-mongodb-operator
#   tag: 1.14.0
# initContainerSecurityContext: {}
# tls:
#   # 90 days in hours
#   certValidityDuration: 2160h
secrets: {}
  # If you set users secret here the operator will use existing one or generate random values
  # If not set the operator generates the default secret with name <cluster_name>-secrets
  # users: my-cluster-name-secrets
  # encryptionKey: my-cluster-name-mongodb-encryption-key

pmm:
  enabled: false
  image:
    repository: percona/pmm-client
    tag: 2.35.0
  serverHost: monitoring-service

replsets:
  - name: rs0
    size: 1
    # externalNodes:
    # - host: 34.124.76.90
    # - host: 34.124.76.91
    #   port: 27017
    #   votes: 0
    #   priority: 0
    # - host: 34.124.76.92
    # configuration: |
    #   operationProfiling:
    #     mode: slowOp
    #   systemLog:
    #     verbosity: 1
    antiAffinityTopologyKey: "kubernetes.io/hostname"
    # tolerations: []
    # priorityClass: ""
    # annotations: {}
    # labels: {}
    # nodeSelector:
    # livenessProbe:
    #   failureThreshold: 4
    #   initialDelaySeconds: 60
    #   periodSeconds: 30
    #   timeoutSeconds: 10
    #   startupDelaySeconds: 7200
    # readinessProbe:
    #   failureThreshold: 8
    #   initialDelaySeconds: 10
    #   periodSeconds: 3
    #   successThreshold: 1
    #   timeoutSeconds: 2
    # runtimeClassName: image-rc
    # storage:
    #   engine: wiredTiger
    #   wiredTiger:
    #     engineConfig:
    #       cacheSizeRatio: 0.5
    #       directoryForIndexes: false
    #       journalCompressor: snappy
    #     collectionConfig:
    #       blockCompressor: snappy
    #     indexConfig:
    #       prefixCompression: true
    #   inMemory:
    #     engineConfig:
    #        inMemorySizeRatio: 0.5
    sidecars:
    - image: percona/mongodb_exporter:0.36
      env:
      - name: EXPORTER_USER
        valueFrom:
          secretKeyRef:
            name: psmdb-db-internal-secrets
            key: MONGODB_CLUSTER_MONITOR_USER
      - name: EXPORTER_PASS
        valueFrom:
          secretKeyRef:
            name: psmdb-db-internal-secrets
            key: MONGODB_CLUSTER_MONITOR_PASSWORD
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: MONGODB_URI
        value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
      args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--mongodb.uri=$(MONGODB_URI)"]
      name: metrics
    #   volumeMounts:
    #     - mountPath: /volume1
    #       name: sidecar-volume-claim
    #     - mountPath: /secret
    #       name: sidecar-secret
    #     - mountPath: /configmap
    #       name: sidecar-config
    # sidecarVolumes:
    # - name: sidecar-secret
    #   secret:
    #     secretName: mysecret
    # - name: sidecar-config
    #   configMap:
    #     name: myconfigmap
    # sidecarPVCs:
    # - apiVersion: v1
    #   kind: PersistentVolumeClaim
    #   metadata:
    #     name: sidecar-volume-claim
    #   spec:
    #     resources:
    #       requests:
    #         storage: 1Gi
    #     volumeMode: Filesystem
    #     accessModes:
    #       - ReadWriteOnce
    podDisruptionBudget:
      maxUnavailable: 1
    expose:
      enabled: true
      exposeType: LoadBalancer
      # loadBalancerSourceRanges:
      #   - 10.0.0.0/8
      serviceAnnotations:
        # service.beta.kubernetes.io/aws-load-balancer-scheme: internal
        # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      # serviceLabels: 
      #   some-label: some-key
    nonvoting:
      enabled: false
      # podSecurityContext: {}
      # containerSecurityContext: {}
      size: 3
      # configuration: |
      #   operationProfiling:
      #     mode: slowOp
      #   systemLog:
      #     verbosity: 1
      antiAffinityTopologyKey: "kubernetes.io/hostname"
      # tolerations: []
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      # nodeSelector: {}
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: "300m"
          memory: "0.5G"
        requests:
          cpu: "300m"
          memory: "0.5G"
      volumeSpec:
        # emptyDir: {}
        # hostPath:
        #   path: /data
        pvc:
          # annotations:
          #   volume.beta.kubernetes.io/storage-class: example-hostpath
          # labels:
          #   rack: rack-22
          # storageClassName: standard
          # accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 3Gi
    arbiter:
      enabled: false
      size: 1
      antiAffinityTopologyKey: "kubernetes.io/hostname"
      # tolerations: []
      # priorityClass: ""
      # annotations: {}
      # labels: {}
      # nodeSelector: {}
    # schedulerName: ""
    # resources:
    #   limits:
    #     cpu: "300m"
    #     memory: "0.5G"
    #   requests:
    #     cpu: "300m"
    #     memory: "0.5G"
    volumeSpec:
      # emptyDir: {}
      # hostPath:
      #   path: /data
      pvc:
        # annotations:
        #   volume.beta.kubernetes.io/storage-class: example-hostpath
        # labels:
        #   rack: rack-22
        storageClassName: mongodb
        # accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 250Gi

sharding:
  enabled: false 

backup:
  enabled: false
  image:
    repository: percona/percona-backup-mongodb
    tag: 2.0.5
  serviceAccountName: percona-server-mongodb-operator
  #  annotations:
  #  iam.amazonaws.com/role: arn:aws:iam::700849607999:role/acme-test-default-eks-mongodb
  # resources:
  #   limits:
  #     cpu: "300m"
  #     memory: "0.5G"
  #   requests:
  #     cpu: "300m"
  #     memory: "0.5G"
  storages:
    s3-us-east:
      type: s3
      s3:
        bucket: acme-prod-mongodb-backup
        credentialsSecret: prod-aws-mongodb
        region: us-east-2
        prefix: ""
        uploadPartSize: 10485760
        maxUploadParts: 10000
        storageClass: STANDARD
        insecureSkipTLSVerify: false
    # minio:
    #   type: s3
    #   s3:
    #     bucket: MINIO-BACKUP-BUCKET-NAME-HERE
    #     region: us-east-1
    #     credentialsSecret: my-cluster-name-backup-minio
    #     endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
    #     prefix: ""
    #   azure-blob:
    #     type: azure
    #     azure:
    #       container: CONTAINER-NAME
    #       prefix: PREFIX-NAME
    #       credentialsSecret: SECRET-NAME
  pitr:
    enabled: false
    # oplogSpanMin: 10
    # compressionType: gzip
    # compressionLevel: 6
  tasks:
   - name: "daily-s3-backup"
     enabled: true
     schedule: "0 1 * * *"
     keep: 3
     type: logical
     storageName: s3-us-east

  # - name: daily-s3-us-west
  #   enabled: true
  #   schedule: "0 0 * * *"
  #   keep: 3
  #   storageName: s3-us-west
  #   compressionType: gzip
  # - name: weekly-s3-us-west
  #   enabled: false
  #   schedule: "0 0 * * 0"
  #   keep: 5
  #   storageName: s3-us-west
  #   compressionType: gzip
  # - name: weekly-s3-us-west-physical
  #   enabled: false
  #   schedule: "0 5 * * 0"
  #   keep: 5
  #   type: physical
  #   storageName: s3-us-west
  #   compressionType: gzip
  #   compressionLevel: 6

# If you set users here the secret will be constructed by helm with these values
# users:
#   MONGODB_BACKUP_USER: backup
#   MONGODB_BACKUP_PASSWORD: backup123456
#   MONGODB_DATABASE_ADMIN_USER: databaseAdmin
#   MONGODB_DATABASE_ADMIN_PASSWORD: databaseAdmin123456
#   MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
#   MONGODB_CLUSTER_ADMIN_PASSWORD: clusterAdmin123456
#   MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
#   MONGODB_CLUSTER_MONITOR_PASSWORD: clusterMonitor123456
#   MONGODB_USER_ADMIN_USER: userAdmin
#   MONGODB_USER_ADMIN_PASSWORD: userAdmin123456
#   PMM_SERVER_API_KEY: apikey
#   # PMM_SERVER_USER: admin
#   # PMM_SERVER_PASSWORD: admin
  3. kubectl run my-shell --rm -i --tty --image ubuntu -- bash (create a temporary pod inside the Kubernetes cluster to run the test)

  4. curl NLB_ADDRESS:27017 -v (run 10 times)

(The problem was identified via an application running the MongoDB driver in Kubernetes, but it can be replicated by simply running a curl command. We are aware that you can't interface with MongoDB over HTTP; curl only tells us whether a TCP connection can be established, which is enough to replicate the problem here.)
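
For convenience, a minimal sketch of the reproduction loop we run from the temporary pod; NLB_ADDRESS is a placeholder for the Internal NLB's DNS name:

# Minimal reproduction sketch (run inside the temporary pod).
# In a stock ubuntu image, install curl first: apt-get update && apt-get install -y curl
# curl cannot speak the MongoDB wire protocol; we only check whether the
# TCP connection is established ("Connected to ...") or times out.
NLB_ADDRESS="REPLACE-WITH-INTERNAL-NLB-DNS-NAME"  # placeholder

for i in $(seq 1 10); do
  echo "--- attempt ${i} ---"
  curl -v --connect-timeout 5 "${NLB_ADDRESS}:27017" 2>&1 \
    | grep -E 'Connected to|timed out|Failed to connect'
done

Against the public instance, all ten attempts print a "Connected to" line; against the internal one, a subset of attempts times out.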

Version:

helm ls -n mongodb

  1. psmdb-db (first MongoDB instance)
  2. psmdb-db-internal (second MongoDB instance, the one with the issue)
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
psmdb-db                mongodb         2               2023-08-03 09:31:58.57105479 +0100 BST  deployed        psmdb-db-1.14.3         1.14.0
psmdb-db-internal       mongodb         1               2023-10-05 08:54:27.817808846 +0100 BST deployed        psmdb-db-1.14.4         1.14.0
psmdb-operator          mongodb         1               2023-05-06 10:35:46.776038271 +0100 BST deployed        psmdb-operator-1.14.2   1.14.0

Expected Result:

We expect connections to MongoDB via the Internal NLB to succeed consistently, without sporadic timeouts.

Actual Result:

We occasionally receive timeouts when attempting to connect to MongoDB via the Internal NLB. The problem first surfaced in one of our applications running in Kubernetes and was then replicated with the curl command above.

Additional Information:

First MongoDB rs.status()

{
  set: 'rs0',
  date: ISODate("2023-10-12T14:28:32.203Z"),
  myState: 1,
  term: Long("1"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1697120910, i: 5 }), t: Long("1") },
    lastCommittedWallTime: ISODate("2023-10-12T14:28:30.783Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1697120910, i: 5 }), t: Long("1") },
    appliedOpTime: { ts: Timestamp({ t: 1697120910, i: 5 }), t: Long("1") },
    durableOpTime: { ts: Timestamp({ t: 1697120910, i: 5 }), t: Long("1") },
    lastAppliedWallTime: ISODate("2023-10-12T14:28:30.783Z"),
    lastDurableWallTime: ISODate("2023-10-12T14:28:30.783Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1697120885, i: 5 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2023-05-22T12:06:40.595Z"),
    electionTerm: Long("1"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1684757200, i: 1 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1684757200, i: 1 }), t: Long("-1") },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    newTermStartDate: ISODate("2023-05-22T12:06:40.621Z"),
    wMajorityWriteAvailabilityDate: ISODate("2023-05-22T12:06:40.635Z")
  },
  members: [
    {
      _id: 0,
      name: 'k8s-mongodb-psmdbdbr-73eba8bcd6-5bb784fe1b0468b0.elb.us-east-2.amazonaws.com:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 12363735,
      optime: { ts: Timestamp({ t: 1697120910, i: 5 }), t: Long("1") },
      optimeDate: ISODate("2023-10-12T14:28:30.000Z"),
      lastAppliedWallTime: ISODate("2023-10-12T14:28:30.783Z"),
      lastDurableWallTime: ISODate("2023-10-12T14:28:30.783Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1684757200, i: 2 }),
      electionDate: ISODate("2023-05-22T12:06:40.000Z"),
      configVersion: 4,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1697120910, i: 5 }),
    signature: {
      hash: Binary(Buffer.from("aea8a569a242895e2c291877718a704d12ad92b3", "hex"), 0),
      keyId: Long("7235977075700531207")
    }
  },
  operationTime: Timestamp({ t: 1697120910, i: 5 })
}

Second MongoDB rs.status()

{
  set: 'rs0',
  date: ISODate("2023-10-12T14:26:54.220Z"),
  myState: 1,
  term: Long("1"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1697120814, i: 1 }), t: Long("1") },
    lastCommittedWallTime: ISODate("2023-10-12T14:26:54.030Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1697120814, i: 1 }), t: Long("1") },
    appliedOpTime: { ts: Timestamp({ t: 1697120814, i: 1 }), t: Long("1") },
    durableOpTime: { ts: Timestamp({ t: 1697120814, i: 1 }), t: Long("1") },
    lastAppliedWallTime: ISODate("2023-10-12T14:26:54.030Z"),
    lastDurableWallTime: ISODate("2023-10-12T14:26:54.030Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1697120799, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2023-10-05T07:55:05.129Z"),
    electionTerm: Long("1"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1696492505, i: 1 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1696492505, i: 1 }), t: Long("-1") },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    newTermStartDate: ISODate("2023-10-05T07:55:05.156Z"),
    wMajorityWriteAvailabilityDate: ISODate("2023-10-05T07:55:05.171Z")
  },
  members: [
    {
      _id: 0,
      name: 'psmdb-db-internal-rs0-0.psmdb-db-internal-rs0.mongodb.svc.cluster.local:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 628334,
      optime: { ts: Timestamp({ t: 1697120814, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-10-12T14:26:54.000Z"),
      lastAppliedWallTime: ISODate("2023-10-12T14:26:54.030Z"),
      lastDurableWallTime: ISODate("2023-10-12T14:26:54.030Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1696492505, i: 2 }),
      electionDate: ISODate("2023-10-05T07:55:05.000Z"),
      configVersion: 3,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1697120814, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("97bbd6e9517fbb63cf8d2152befbeb741b882aa3", "hex"), 0),
      keyId: Long("7286379826884116486")
    }
  },
  operationTime: Timestamp({ t: 1697120814, i: 1 })
}

We are migrating from a public-facing MongoDB instance to a private (internal-facing) MongoDB instance.

Update: we identified the problem, tested the solution, and it is now working. In case anyone else comes across this, make sure you set the annotation:

service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=false
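
Our understanding (worth verifying against the AWS NLB documentation) is that with client IP preservation enabled, traffic that loops back through the NLB to a target on the same node can be dropped, which matches the intermittent timeouts we saw. For reference, a sketch of where the annotation sits in the Helm values for the internal instance, showing only the relevant keys:

replsets:
  - name: rs0
    expose:
      enabled: true
      exposeType: LoadBalancer
      serviceAnnotations:
        service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=false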

However, as our MongoDB was already deployed, we had to find the NLB's target group manually and disable the attribute there.
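
For anyone making the same manual change, a sketch using the AWS CLI; the target group ARN below is a placeholder that has to be looked up first:

# List target groups to find the one behind the Internal NLB.
aws elbv2 describe-target-groups \
  --query 'TargetGroups[].{Name:TargetGroupName,Arn:TargetGroupArn}'

# Disable client IP preservation on that target group (placeholder ARN).
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-2:111111111111:targetgroup/EXAMPLE/0123456789abcdef \
  --attributes Key=preserve_client_ip.enabled,Value=false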

My only remaining concern is whether we are okay to leave this as is: with the change made manually on the target group, is it going to revert?

Is it possible to add the above annotation to our existing config and reapply?
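
If it is, the reapply would look something like the following; values-internal.yaml is a placeholder for the values file of the internal release, and we assume the chart is installed from the percona Helm repo:

# Re-apply the internal release with the updated serviceAnnotations.
helm upgrade psmdb-db-internal percona/psmdb-db -n mongodb -f values-internal.yaml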

Resources:

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/service/annotations/