Sharding with AWS IAM auth

Hey,
I recently started using Percona MongoDB. I deployed everything on an EKS cluster with Helm charts (driven by ArgoCD). I currently have a sharded MongoDB instance with 2 mongos, 3 configdb, and two shards with one instance each (test environment). I want my services and users to authenticate using AWS IAM roles, and I've tried two things:

First, I tried to set up AWS IAM auth as described in the Percona documentation. It doesn't say anything about sharded instances, so I assumed I had to set the configuration on my mongos. I got a first error saying that security.authorization is not allowed on mongos, so I removed it, and then the pod was never ready because the listDatabases command requires authentication.
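
For context, the snippet I applied on the mongos looked roughly like this (reconstructed from memory, so treat it as an approximation of what I actually had, not an exact copy):

security:
  authorization: enabled
setParameter:
  # add MONGODB-AWS next to the default SCRAM mechanisms
  authenticationMechanisms: "MONGODB-AWS,SCRAM-SHA-256"

The security.authorization part is what mongos rejected.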

So then I removed all the configuration and tried to log in anyway, but I got the error Error: connect EHOSTUNREACH 169.254.169.254:80. I've searched for this error but found nothing related to Percona MongoDB, and nothing pointing to what my issue could be. I think I am on the right path though.
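
In case it helps, I was connecting from a pod whose service account has an IAM role attached (IRSA), with something like the following, where the host is just a placeholder for my mongos service:

mongosh 'mongodb://my-mongos.mongodb.svc.cluster.local:27017/?authSource=$external&authMechanism=MONGODB-AWS'

169.254.169.254 is the EC2 instance metadata endpoint, so my understanding is that the driver found no AWS credentials in the pod environment and fell back to querying instance metadata, which my pods apparently cannot reach.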

Has anyone encountered this kind of error? Or does anyone have experience with this kind of setup?
Thank you very much; if you need extra configuration files or log files, I will provide them. Thanks!

Hey @Mike_Devresse,

Are you using the Operator? I ask because you created the question in the Percona Server for MongoDB category, and I'm not sure if it is.

If it is the Operator, could you please share your Helm values manifest?

In a nutshell, it should work with the databases controlled by the Operator.

allowUnsafeConfigurations: true
replsets:
  - name: rs0
    size: 1
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 5Gi
    resources:
      requests:
        memory: 512Mi
        cpu: 150m
      limits:
        memory: 512Mi
  - name: rs1
    size: 1
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 5Gi
    resources:
      requests:
        memory: 512Mi
        cpu: 150m
      limits:
        memory: 512Mi
sharding:
  enabled: true
  balancer:
    enabled: true
  configrs:
    size: 3
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    resources:
      limits:
        cpu: 100m
        memory: 300Mi
      requests:
        cpu: 100m
        memory: 300Mi
    volumeSpec:
      pvc:
        resources:
          requests:
            storage: 5Gi
  mongos:
    size: 2
    configuration: |
      setParameter:
        awsStsHost: "sts.eu-west-3.amazonaws.com"
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        memory: 200Mi
    expose:
      exposeType: NodePort
backup:
  enabled: false
  storages: {}
  tasks: []
As I said, I did not set the configuration for AWS auth, because otherwise my cluster doesn't work at all: the mongos, configdb and shards can't authenticate with each other. Yes, it is managed by the Operator. Is there a way to use Helm and the AWS auth?
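
What I had in mind is passing a configuration block per component through the Helm values, along these lines (just a sketch: the exact mechanism list and repeating awsStsHost everywhere are my guesses, and I kept SCRAM in the list so the Operator's own system users and internal auth hopefully keep working):

replsets:
  - name: rs0
    configuration: |
      setParameter:
        # keep SCRAM so the Operator's system users can still log in
        authenticationMechanisms: "MONGODB-AWS,SCRAM-SHA-1,SCRAM-SHA-256"
        awsStsHost: "sts.eu-west-3.amazonaws.com"
  - name: rs1
    configuration: |
      setParameter:
        authenticationMechanisms: "MONGODB-AWS,SCRAM-SHA-1,SCRAM-SHA-256"
        awsStsHost: "sts.eu-west-3.amazonaws.com"
sharding:
  configrs:
    configuration: |
      setParameter:
        authenticationMechanisms: "MONGODB-AWS,SCRAM-SHA-1,SCRAM-SHA-256"
        awsStsHost: "sts.eu-west-3.amazonaws.com"
  mongos:
    configuration: |
      setParameter:
        authenticationMechanisms: "MONGODB-AWS,SCRAM-SHA-1,SCRAM-SHA-256"
        awsStsHost: "sts.eu-west-3.amazonaws.com"

Is that the right direction, or is there another supported way to do this with the Operator?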