AWS Load Balancers are not removed after switching exposeType from LoadBalancer to ClusterIP

Description:

I have created a MongoDB sharded cluster and exposed its nodes using the following configuration in cr.yaml:

expose:
  enabled: true
  exposeType: LoadBalancer

To expose the nodes I am using the aws-load-balancer-controller with Network Load Balancers (NLB).
When I change exposeType from LoadBalancer to ClusterIP, I can see that the type of the replica set Kubernetes Services changes from LoadBalancer to ClusterIP. However, when I check the AWS console, the Load Balancers still exist and are not removed.
Is this normal behavior, or is something wrong with my environment?
I expect the Load Balancers to be removed automatically.
For comparison, I also tested changing the service type from LoadBalancer to ClusterIP directly on the Kubernetes Service by hand; in that case the AWS Load Balancers are removed automatically.
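For reference, the manual change can be done with a command along these lines (a sketch; the Service name and namespace match the cluster from this report):

```shell
# Flip the Service type by hand. On this path the
# aws-load-balancer-controller processes its finalizer and
# removes the NLB as part of handling the change.
kubectl patch svc test-rs0-0 -n psmdb-operator \
  -p '{"spec":{"type":"ClusterIP"}}'
```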

When I expose the nodes using the in-tree load balancer controller and Classic Load Balancers, the Load Balancers are removed automatically, which I believe is the expected behavior.

Version:

psmdb operator version 1.13
AWS EKS 1.23

Let me add more information and clarification on this problem.
I have created MongoDB sharded cluster using this custom resource configuration:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: test
  namespace: psmdb-operator
spec:
  crVersion: 1.13.0
  image: percona/percona-server-mongodb:4.4.13
  allowUnsafeConfigurations: true
  updateStrategy: RollingUpdate
  replsets:
  - name: rs0
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
    expose:
      enabled: true
      exposeType: LoadBalancer
      serviceAnnotations:
        "service.beta.kubernetes.io/aws-load-balancer-scheme": "internal"
        "service.beta.kubernetes.io/aws-load-balancer-type": "external"
        "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip"
  sharding:
    enabled: true
    configsvrReplSet:
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
      expose:
        enabled: true
        exposeType: LoadBalancer
        serviceAnnotations:
          "service.beta.kubernetes.io/aws-load-balancer-scheme": "internal"
          "service.beta.kubernetes.io/aws-load-balancer-type": "external"
          "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip"
    mongos:
      size: 1

The psmdb operator created the LoadBalancer Services.

After changing exposeType from LoadBalancer to ClusterIP in the custom resource, the psmdb operator changed the replica Services' type from LoadBalancer to ClusterIP accordingly.

But when checking the AWS console, we can see that the Load Balancers still exist and are not removed.

In the same scenario, if I instead change the service type from LoadBalancer to ClusterIP directly on the replica Services by hand, the AWS console shows that the Load Balancers have been removed.

I expect the Load Balancers to be removed automatically when the psmdb operator changes the service type, but this does not happen. What could be the reason?

Hi @kamuna !
To me this looks like the following issue: Updating Service from LoadBalancer to ClusterIp is resulting with orphaned NLB in AWS · Issue #95042 · kubernetes/kubernetes · GitHub
If so, it might help to leave a comment or vote there.

On the other hand, a possible workaround on the operator side would be: when the exposure changes from LoadBalancer to ClusterIP, instead of just updating the Service, delete and recreate it. If I'm not mistaken, in that case the load balancer in the cloud would be deleted.
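The delete-and-recreate idea can be sketched manually with the Service from this report (hypothetical commands; the operator would do the equivalent internally):

```shell
# Deleting the Service keeps it in "Terminating" until the
# aws-load-balancer-controller finishes its finalizer work,
# i.e. until the NLB and its target groups are gone.
kubectl delete svc test-rs0-0 -n psmdb-operator

# The operator then recreates the Service with the new
# ClusterIP type on its next reconcile.
```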

Hi @Tomislav_Plavcic, thanks for the response!
The issue you referenced is related to the legacy in-tree AWS cloud provider, not to the AWS Load Balancer Controller:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/service/annotations/#legacy-cloud-provider

The referenced issue is also quite outdated. As I mentioned in my first post, the problem does not exist with the legacy in-tree AWS cloud provider; it exists when the AWS Load Balancer Controller manages the Kubernetes Services.
I have opened a corresponding issue in the AWS Load Balancer Controller project, but there have been no results so far.

I would appreciate it if you could provide more information on this question:

It looks like the root cause is that the psmdb operator deletes the .metadata.finalizers section of the managed Service object when changing exposeType: LoadBalancer to ClusterIP, so the aws-load-balancer-controller never learns that it has to remove the Load Balancer and the other AWS resources. The aws-load-balancer-controller itself must remove its finalizer after cleaning up the AWS resources; the psmdb operator should not strip it.
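One way to confirm this on a live cluster is to watch the finalizers on the managed Service while changing exposeType (a sketch using the Service name from this report):

```shell
# Print the finalizers on the managed Service. The
# aws-load-balancer-controller adds service.k8s.aws/resources
# and is supposed to remove it only after the NLB is cleaned up.
kubectl get svc test-rs0-0 -n psmdb-operator \
  -o jsonpath='{.metadata.finalizers}'
```

If the operator rewrites the Service without that finalizer, the controller's cleanup hook never fires, which matches the before/after dumps of the Service object shown here.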

Here is what one of the Kubernetes Services created by the psmdb operator looks like before updating exposeType: LoadBalancer → ClusterIP

apiVersion: v1
kind: Service
metadata:
  annotations:
    percona.com/last-config-hash: ...
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-type: external
  creationTimestamp: "2023-08-18T09:18:43Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  - service.k8s.aws/resources
  labels:
    app.kubernetes.io/component: external-service
    app.kubernetes.io/instance: test
    app.kubernetes.io/managed-by: percona-server-mongodb-operator
    app.kubernetes.io/name: percona-server-mongodb
    app.kubernetes.io/part-of: percona-server-mongodb
    app.kubernetes.io/replset: rs0
  name: test-rs0-0
  namespace: psmdb-operator
  ownerReferences:
  - apiVersion: psmdb.percona.com/v1
    controller: true
    kind: PerconaServerMongoDB
    name: test
    uid: b4338f1c-4d8d-4303-bf54-8eab10bce0ff
  resourceVersion: "76837744"
  uid: 861fa87c-c98b-43f1-a723-44edbbb7ebce
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 172.20.92.153
  clusterIPs:
  - 172.20.92.153
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: mongodb
    nodePort: 31382
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: test-rs0-0
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: k8s-psmdbope-testrs00-923650b8a7-2a7928ee7e46352a.elb.us-east-1.amazonaws.com

Here is the same Service after updating exposeType: LoadBalancer → ClusterIP

apiVersion: v1
kind: Service
metadata:
  annotations:
    percona.com/last-config-hash: ...
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-type: external
  creationTimestamp: "2023-08-18T09:18:43Z"
  labels:
    app.kubernetes.io/component: external-service
    app.kubernetes.io/instance: test
    app.kubernetes.io/managed-by: percona-server-mongodb-operator
    app.kubernetes.io/name: percona-server-mongodb
    app.kubernetes.io/part-of: percona-server-mongodb
    app.kubernetes.io/replset: rs0
  name: test-rs0-0
  namespace: psmdb-operator
  ownerReferences:
  - apiVersion: psmdb.percona.com/v1
    controller: true
    kind: PerconaServerMongoDB
    name: test
    uid: b4338f1c-4d8d-4303-bf54-8eab10bce0ff
  resourceVersion: "76843563"
  uid: 861fa87c-c98b-43f1-a723-44edbbb7ebce
spec:
  clusterIP: 172.20.92.153
  clusterIPs:
  - 172.20.92.153
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: mongodb
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: test-rs0-0
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

As we can see, the finalizers section has disappeared from the Service object.
However, checking the AWS console shows that the AWS Load Balancer and the other AWS resources still exist.
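Until this is fixed, the orphaned NLBs can be found and removed manually with the AWS CLI (a sketch; the ARN placeholder must be filled in from the describe output):

```shell
# List remaining load balancers to spot the orphaned NLB
# (its name matches the hostname in the Service status above).
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].{Name:LoadBalancerName,DNS:DNSName}' \
  --output table

# Delete the orphaned NLB by ARN (its leftover target groups
# may need separate cleanup with elbv2 delete-target-group).
aws elbv2 delete-load-balancer --load-balancer-arn <arn>
```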