How do we add an AWS IRSA annotation to the backups to grant S3 access

Using the Percona XtraDB Cluster Operator (deployed via its Helm charts), running on AWS EKS with IRSA enabled.
How would we specify the ServiceAccount used by the backup process so that the AWS credentials can be injected into the pod when the backup runs?

Access to the S3 bucket is restricted. To grant access to it safely and securely, without injecting credentials into a Kubernetes secret (or granting the underlying worker node access to the bucket), we want to use the IAM Roles for Service Accounts (IRSA) functionality of EKS.

This means that the backup process needs to be provided with a ServiceAccount that carries an annotation referencing an AWS role:

eks.amazonaws.com/role-arn: <arn>
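For example, the ServiceAccount we’d expect the backup pods to run under would look roughly like this (the name, namespace, and role ARN below are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: db-backup-serviceaccount   # placeholder name
  namespace: my-app                # placeholder namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-backup-role   # placeholder role ARN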

Having gone through the source of the Operator, we’ve found a fair few undocumented features; being able to specify the service account for backups is one of them, as is setting annotations on the storage definitions.

Which is the correct approach to specify the service account to use for the backups?


Hello @Gerwin_van_de_Steeg ,

to specify the service account for backups, you can use the spec.backup.serviceAccountName field in the main CR.
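
For example, a minimal sketch of the relevant part of the custom resource (the cluster and ServiceAccount names here are placeholders):

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: my-cluster                                 # placeholder cluster name
spec:
  backup:
    serviceAccountName: db-backup-serviceaccount   # ServiceAccount annotated with eks.amazonaws.com/role-arn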

That should be it. Please let me know if it helps.


That “should” be it if you create the entire CR yourself, which we’re not doing. We are using the Helm chart for the database to create the CR for us. This Helm chart does not expose:

spec.backup.serviceAccountName
nor any indication of the default for spec.automountServiceAccountToken for the Pods created
as well as the chart not functioning if you don’t specify spec.backup.storages..s3.credentials* (either the credential pair or a secret containing them, neither of which applies with IRSA).

So no, the Helm chart for the DB does not support IRSA.


I missed that you used Helm. I saw you created these two tickets:
https://jira.percona.com/browse/K8SPXC-770
https://jira.percona.com/browse/K8SPXC-771

We’ll see what can be done. Would you be able to push the PR?


Hi,

I’ve not created any PRs for this and don’t have the time available at the moment (nor likely will I in the foreseeable few months). I’ve lodged those tickets so that someone else in a similar situation can find them without having to dig through C++ and Go code or Helm charts. I might circle back to this when I can.

Cheers,
Gerwin


I’m trying to do the equivalent with Workload Identity. I was expecting that adding the relevant annotations would suffice, but the backup configuration still expects a secret.

{"level":"error","ts":1665652704.1108687,"logger":"controller.perconaxtradbclusterbackup-controller","msg":"Reconciler error","name":"my-test-backup","namespace":"my-app","error":"create backup job: Job.batch \"xb-my-test-backup\" is invalid: [spec.template.spec.containers[0].env[4].valueFrom.secretKeyRef.name: Invalid value: \"\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'),


Created this ticket to fix it:
https://jira.percona.com/browse/K8SPXC-1114
I can make a PR if what I’m saying makes sense to you all :grinning:


The problem is a bit bigger here.
Our underlying software, XtraBackup and xbcloud, does not support AWS IAM roles for now.

We have the following tickets to cover this:
https://jira.percona.com/browse/PXB-1882
https://jira.percona.com/browse/PXB-2856

Once we have it implemented, we will add this functionality into the Operator.


I’m revisiting the Percona material here, and those tickets seem to be completed; how far has this gone toward making it workable? It’s been two years, and my current exploration indicates this still doesn’t actually work.

$ kubectl get pod/xb-cron-mysql-pxc-db-sql-20241060610-3ldds-qwshg -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::1234567678:role/some-irsa-sql-s3-backup-rolename
  creationTimestamp: "2024-10-06T00:12:58Z"
...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-10-06T00:12:59Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-10-06T00:13:18Z"
    reason: PodFailed
    status: "False"
    type: Ready
...
  phase: Failed

Relevant configuration

    backup:
      enabled: true
      # image:
      #   tag: 1.15.0-pxc8.0-backup-pxb8.0.35-debug
      serviceAccountName: sql-db-backup-serviceaccount
      pitr:
        enabled: true
        storageName: binlogs
        # time in seconds between uploads
        timeBetweenUploads: 300
      storages:
        # the normal backups
        sql:
          type: s3
          annotations:
            eks.amazonaws.com/role-arn: arn:aws:iam::1234567678:role/some-irsa-sql-s3-backup-rolename
          s3:
            bucket: example-bucket-database-backups/sql/db1/
            ## with secret specified it does not work
            # credentialsSecret: example-sql-backup-aws-credentials
            # credentialsAccessKey: ""
            # credentialsSecretKey: ""
            region: us-west-2
        # the pitr binlogs for quick restore/replay
        binlogs:
          type: s3
          annotations:
            eks.amazonaws.com/role-arn: arn:aws:iam::1234567678:role/some-irsa-sql-s3-backup-rolename
          s3:
            bucket: example-bucket-database-backups/sql/pitr/
            ## with secret specified it does not work
            # credentialsSecret: example-sql-backup-aws-credentials
            # credentialsAccessKey: ""
            # credentialsSecretKey: ""
            region: us-west-2
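
For reference, if IRSA were being honoured we’d expect the EKS pod identity webhook to mutate the backup pod with something along these lines (a sketch of the expected injection, not output from this cluster; the container name is illustrative):

  containers:
  - name: xtrabackup
    env:
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::1234567678:role/some-irsa-sql-s3-backup-rolename
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    volumeMounts:
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true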