The operator tries to modify the headless service with the loadBalancerSourceRanges attribute

Description:

The replsets option loadBalancerSourceRanges doesn’t work as expected.

Steps to Reproduce:

After a fresh install of a replica set of size 3, I have four services:
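
The listing below comes from a command along these lines (the mongodb namespace matches the CR shared later in this thread):

kubectl get svc -n mongodb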

NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)           AGE
mongodb-psmdb-db-rsmain             ClusterIP      None            <none>         27017/TCP         3h21m
mongodb-psmdb-db-rsmain-0           LoadBalancer   10.50.213.179   10.130.86.58   27017:32643/TCP   3h21m
mongodb-psmdb-db-rsmain-1           LoadBalancer   10.50.249.43    10.130.86.60   27017:31465/TCP   3h21m
mongodb-psmdb-db-rsmain-arbiter-0   LoadBalancer   10.50.231.155   10.130.86.59   27017:31316/TCP   3h21m

The expose configuration looks like this:

expose:
    enabled: true
    exposeType: LoadBalancer
    serviceAnnotations:
      networking.gke.io/load-balancer-type: Internal

But when I try to limit the source IP ranges with:

loadBalancerSourceRanges:
    - 10.130.86.2/32

I receive the following error in the operator logs:

2023-07-23T10:06:49.373Z        ERROR   Reconciler error        {"controller": "psmdb-controller", "object": {"name":"mongodb-psmdb-db","namespace":"mongodb"}, "namespace": "mongodb", "name": "mongodb-psmdb-db", "reconcileID": "b7788202-241d-43cc-9ec4-b55d847bf06d", "error": "create or update service for replset rsmain: Service \"mongodb-psmdb-db-rsmain\" is invalid: spec.LoadBalancerSourceRanges: Forbidden: may only be used when `type` is 'LoadBalancer'", "errorVerbose": "Service \"mongodb-psmdb-db-rsmain\" is invalid: spec.LoadBalancerSourceRanges: Forbidden: may only be used when `type` is 'LoadBalancer'\ncreate or update service for replset rsmain\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:470\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:122\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:323\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:235\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1594"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:329
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:274
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:235
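
The validation in that error comes from the Kubernetes API server itself: spec.loadBalancerSourceRanges may only be set on a Service of type LoadBalancer. A minimal sketch outside the operator (hypothetical service name) reproduces the same Forbidden error on kubectl apply:

apiVersion: v1
kind: Service
metadata:
  name: headless-demo              # hypothetical name, for illustration only
spec:
  type: ClusterIP
  clusterIP: None                  # headless, like mongodb-psmdb-db-rsmain
  ports:
  - port: 27017
  loadBalancerSourceRanges:        # rejected: only valid when type is LoadBalancer
  - 10.130.86.2/32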

Version:

percona-server-mongodb-operator:1.14.0

Expected Result:

The loadBalancerSourceRanges attribute should be added only to the replica set member services, which are the ones of type LoadBalancer.
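
In other words, each rendered per-member Service should end up roughly like the sketch below (names and values taken from the listing above), while the headless Service stays untouched:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-psmdb-db-rsmain-0
  annotations:
    networking.gke.io/load-balancer-type: Internal
spec:
  type: LoadBalancer
  ports:
  - port: 27017
  loadBalancerSourceRanges:        # valid here, because type is LoadBalancer
  - 10.130.86.2/32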

Actual Result:

The operator tries to add the loadBalancerSourceRanges attribute to all services, even the headless one of type ClusterIP, which the API server rejects.

Do you have any ideas or an ETA for fixing the issue?

Hi @Denys_Kyrii,

Welcome to the Percona Community!
From the operator logs, it seems that the loadBalancerSourceRanges parameter is not placed properly in the cr.yaml file; it should be under “spec.replsets.expose”. Kindly share the complete cr.yaml file for further analysis.

Kindly refer to the link below for a cr.yaml configuration reference.
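
For example, the placement should look roughly like this (replica set name taken from your service listing):

spec:
  replsets:
  - name: rsmain
    expose:
      enabled: true
      exposeType: LoadBalancer
      loadBalancerSourceRanges:
        - 10.130.86.2/32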

Regards,
Parag

I double-checked the location of the directive in my CR, and it still doesn’t work.

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB"}
    meta.helm.sh/release-name: mongodb
    meta.helm.sh/release-namespace: mongodb
  creationTimestamp: "2023-07-30T16:36:15Z"
  finalizers:
  - delete-psmdb-pods-in-order
  - delete-psmdb-pvc
  generation: 2
  labels:
    app.kubernetes.io/instance: mongodb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: psmdb-db
    app.kubernetes.io/version: 1.14.0
    helm.sh/chart: psmdb-db-1.14.3
  name: mongodb-psmdb-db
  namespace: mongodb
  resourceVersion: "3683083"
  uid: b865616a-b7f0-4d56-99fa-37e7e577f350
spec:
  allowUnsafeConfigurations: true
  backup:
    enabled: true
    image: prod.reg.io/percona-backup-mongodb:1.8.0-1
    pitr:
      compressionLevel: 3
      enabled: true
      oplogSpanMin: 10
    resources:
      limits:
        cpu: 300m
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 1024Mi
    serviceAccountName: percona-server-mongodb-operator
    storages:
      prod-default:
        s3:
          bucket: juicefs-enc-sc
          credentialsSecret: prod-default-backup-s3
          endpointUrl: http://s3-juicefs-enc-sc.juicefs.svc.cluster.local:9000
          prefix: backup/mongo-pbm/dkyrii-dev1
          region: europe-west3
          uploadPartSize: 104857600
        type: s3
    tasks:
    - compressionLevel: 3
      compressionType: gzip
      enabled: true
      name: weekly-prod-default
      schedule: 0 6 * * 0
      storageName: prod-default
  clusterServiceDNSMode: Internal
  clusterServiceDNSSuffix: svc.cluster.local
  crVersion: 1.13.0
  image: prod.reg.io/percona-server-mongodb:6.0.3-2
  imagePullPolicy: IfNotPresent
  imagePullSecrets:
  - name: regcred
  multiCluster:
    enabled: false
  pause: false
  platform: kubernetes
  pmm:
    enabled: false
    image: percona/pmm-client:2.35.0
    serverHost: monitoring-service
  replsets:
  - affinity:
      advanced:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                - stateful-pool1
                - stateful-pool2
            weight: 50
          - preference:
              matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                - mongo-pool1
                - mongo-pool2
            weight: 100
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                - mongo-pool1
                - mongo-pool2
                - stateful-pool1
                - stateful-pool2
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/component: mongod
                  app.kubernetes.io/instance: mongodb-psmdb-db
              topologyKey: kubernetes.io/hostname
            weight: 100
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/component: mongod
                  app.kubernetes.io/instance: mongodb-psmdb-db
              topologyKey: topology.kubernetes.io/zone
            weight: 100
      antiAffinityTopologyKey: kubernetes.io/hostname
    arbiter:
      affinity:
        advanced:
          nodeAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                - key: cloud.google.com/gke-nodepool
                  operator: In
                  values:
                  - stateful-pool1
                  - stateful-pool2
              weight: 50
            - preference:
                matchExpressions:
                - key: cloud.google.com/gke-nodepool
                  operator: In
                  values:
                  - mongo-pool1
                  - mongo-pool2
              weight: 100
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: cloud.google.com/gke-nodepool
                  operator: In
                  values:
                  - mongo-pool1
                  - mongo-pool2
                  - stateful-pool1
                  - stateful-pool2
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: arbiter
                    app.kubernetes.io/instance: mongodb-psmdb-db
                topologyKey: kubernetes.io/hostname
              weight: 100
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: arbiter
                    app.kubernetes.io/instance: mongodb-psmdb-db
                topologyKey: topology.kubernetes.io/zone
              weight: 100
        antiAffinityTopologyKey: kubernetes.io/hostname
      enabled: true
      size: 1
    expose:
      enabled: true
      exposeType: LoadBalancer
      loadBalancerSourceRanges:
      - 10.130.86.2/32
      serviceAnnotations:
        networking.gke.io/load-balancer-type: Internal
    name: rsmain
    podDisruptionBudget:
      maxUnavailable: 1
    resources:
      limits:
        cpu: 1000m
        memory: 2560Mi
      requests:
        cpu: 500m
        memory: 2560Mi
    size: 2
    volumeSpec:
      persistentVolumeClaim:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi
        storageClassName: balanced-xfs-sc
  secrets:
    encryptionKey: mongodb-encryption-key
    users: mongodb-secrets
  sharding:
    configsvrReplSet:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      expose:
        enabled: false
        exposeType: ClusterIP
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    enabled: false
    mongos:
      affinity:
        antiAffinityTopologyKey: kubernetes.io/hostname
      expose:
        exposeType: ClusterIP
      podDisruptionBudget:
        maxUnavailable: 1
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 2
  unmanaged: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: disabled
    schedule: 0 2 * * *
    setFCV: false
    versionServiceEndpoint: https://check.percona.com

I just want to confirm whether you’re going to fix this. It’s quite hard to pass certifications like PCI DSS or SOC 2 without firewall rules for each service in the cardholder network.

Hi @Denys_Kyrii,

Kindly add two spaces before “- 10.130.86.2/32” and try again. Refer to the snippet below.

expose:
  enabled: true
  exposeType: LoadBalancer
  loadBalancerSourceRanges:
    - 10.130.86.2/32

Regards,
Parag

I copied you the content of the rendered object from the Kubernetes API.
Please run the command below on your side, replacing the namespace and release name with your values, and you will see the same:

kubectl get PerconaServerMongoDB -n mongodb mongodb-psmdb-db -o yaml
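
For what it’s worth, both indentation styles are equivalent YAML: entries of a block sequence may sit at the same indentation level as their parent key. The two snippets below parse to the identical structure:

loadBalancerSourceRanges:
- 10.130.86.2/32

loadBalancerSourceRanges:
  - 10.130.86.2/32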

Do you have any updates here?

@Parag_Bhayani Could you please explain, according to your internal procedures, how to assign this issue to someone?

The bug has already been reported:
https://jira.percona.com/browse/K8SPSMDB-791