MongoDB automatic backup not working using AWS service account


I am trying to set up automatic backups using a service account, without passing a credentials secret. The backup does not seem to be working.

Steps to Reproduce:

Deploy Percona Server for MongoDB using the Helm chart and try configuring the backup using only a service account.

Below is the backup section of the values.yaml I used:

```yaml
backup:
  enabled: true
  image:
    repository: percona/percona-backup-mongodb
    tag: 2.3.0
  serviceAccountName: percona-server-mongodb-operator
  annotations: arn:aws:iam::645193536862:role/prod-percona-s3-role
  # podSecurityContext: {}
  # containerSecurityContext: {}
  # resources:
  #   limits:
  #     cpu: "300m"
  #     memory: "0.5G"
  #   requests:
  #     cpu: "300m"
  #     memory: "0.5G"
  storages:
    backupstorage:
      type: s3
      s3:
        bucket: "ds-mongodb-backup"
        credentialsSecret: percona-backup-secret
        region: us-east-1
        prefix: "perconabackup"
        # uploadPartSize: 10485760
        # maxUploadParts: 10000
        storageClass: STANDARD
        insecureSkipTLSVerify: true
    # minio:
    #   type: s3
    #   s3:
    #     region: us-east-1
    #     credentialsSecret: my-cluster-name-backup-minio
    #     endpointUrl: http://minio.psmdb.svc.cluster.local:9000/minio/
    #     prefix: ""
    # azure-blob:
    #   type: azure
    #   azure:
    #     container: CONTAINER-NAME
    #     prefix: PREFIX-NAME
    #     credentialsSecret: SECRET-NAME
  pitr:
    enabled: false
    oplogOnly: false
    # oplogSpanMin: 10
    # compressionType: gzip
    # compressionLevel: 6
  tasks:
    - name: mongobackup_night
      enabled: true
      schedule: "30 23 * * *"
      keep: 5
      storageName: backupstorage
      compressionType: gzip
      type: logical
    - name: mongobackup_day
      enabled: true
      schedule: "0 12 * * 0"
      keep: 5
      storageName: backupstorage
      compressionType: gzip
      type: logical
```
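One thing worth double-checking in the values above: on EKS, IAM Roles for Service Accounts (IRSA) expects the role ARN under the `eks.amazonaws.com/role-arn` annotation key, not as a bare string value of `annotations`. A sketch of that shape, reusing the role ARN from this report (whether the chart forwards these annotations to the service account used by the backup pods is an assumption here):

```yaml
serviceAccountName: percona-server-mongodb-operator
# eks.amazonaws.com/role-arn is the standard IRSA annotation key;
# the value is the role ARN from the report above
annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::645193536862:role/prod-percona-s3-role
```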




The error I am getting in my logs is:

```
[agentCheckup] check storage connection: storage check failed with: get S3 object header: Forbidden: Forbidden
                                	status code: 403, request id: XMW9A46YZQR1D6RH, host id: hIfOWJwRb6E2jxlfURSgKv9+gPl0F3fnIIeDg/qBjIFjlCyorRt5EvOTU1vxOa7sv/JfiSPHfLg=
```

Expected Result:

The automatic backup should be initiated.

Actual Result:

There is a permission error when the credentials secret is omitted.


Hello @Vatsal_Sharma ,

According to our docs, you can use a service account in the following way.

The following steps are needed to turn this feature on:

  • Create the IAM instance profile and a permission policy that grants the required access level to the S3 buckets.
  • Attach the IAM profile to an EC2 instance.
  • Configure an S3 storage bucket and verify the connection from the EC2 instance to it.
  • Do not provide s3.credentialsSecret for the storage in deploy/cr.yaml.
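For the first step, a minimal example of such a permission policy, assuming the bucket name from the report (this is a sketch, not the documented policy; the exact set of actions PBM needs may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::ds-mongodb-backup/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::ds-mongodb-backup"
    }
  ]
}
```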

So, in a nutshell: if you already have a role attached to the EC2 instance, you can just skip the credentialsSecret section in the Operator configuration to get it working.
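Concretely, with the bucket from this report, the storage section would then look something like this (a sketch, keeping the same storage name as in the values above):

```yaml
backup:
  enabled: true
  storages:
    backupstorage:
      type: s3
      s3:
        bucket: "ds-mongodb-backup"
        region: us-east-1
        prefix: "perconabackup"
        # credentialsSecret intentionally omitted: the IAM role attached
        # to the instance profile (or service account) is used instead
```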

Hey @Sergey_Pronin, we have an EKS cluster where I have provided the required access to my service accounts (both operator and server) by annotating them with the required role. When we omit the credentials secret, it starts failing and the backup gives the error above.

Here, by adding the access to the EC2 instance, do we mean providing the access to the node group roles in my cluster?