Percona Operator XtraDB Backup Cron to Minio Fails to add credentials

Hello all,

I’m trying to get the operator’s backup function to work, but the problem appears to be inside the backup container itself. I’ve inspected backup.sh inside the backup container, and it appears that adding the credentials is failing. I have tested the same credentials from other machines and containers, and my account settings do work there.
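
For reference, the inspection can be done along these lines (a sketch; the pod name is a placeholder and this assumes kubectl access to the cluster namespace):

kubectl get pods | grep backup                                 # find the pod spawned by the backup cron job
kubectl exec -it <backup-pod-name> -- cat /usr/bin/backup.sh   # read the script it runs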

Any advice or assistance would be appreciated.

Zach


Hello @Zach,

The error is returned by mc (the Minio client). It says Access Denied, which usually means the credentials are wrong.

From restore-backup.sh I can see that we execute the following command:

mc -C /tmp/mc config host add dest "${ENDPOINT:-https://s3.amazonaws.com}" "$ACCESS_KEY_ID" "$SECRET_ACCESS_KEY"
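
A quick sanity check after the alias is added (a sketch, run inside the backup container) is to list the bucket through it; with bad credentials it fails the same way:

# An Access Denied here points at the credentials, not at the alias syntax.
mc -C /tmp/mc ls dest/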

And here I see that ACCESS_KEY_ID (and, analogously, the secret key) are defined based on what you have in the secrets:

accessKey := corev1.EnvVar{
	Name: "ACCESS_KEY_ID",
	ValueFrom: &corev1.EnvVarSource{
		SecretKeyRef: app.SecretKeySelector(s3.CredentialsSecret, "AWS_ACCESS_KEY_ID"),
	},
}

Do I understand correctly that if you execute the mc command manually with the correct keys it works fine, but it does not work in the operator?

If so, please double-check that the keys are set correctly in your secrets file (like backup-s3.yaml).
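
One way to verify what the operator actually sees is to decode the secret directly (a sketch; the secret name is taken from the cr.yaml examples in this thread and may differ in your setup):

# Decode the two keys the backup container reads from the credentials secret.
kubectl get secret my-cluster-name-backup-s3 -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo
kubectl get secret my-cluster-name-backup-s3 -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d; echo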

I’ve discovered the issue: if you are using minio, you need to specify the API version on the minio client command ("--api S3v4") for adding the credentials to work. Could this be fixed in the backup script?

I tested this manually from inside the container, and it only works when the API parameter is applied.

Zach

Adding this to the startup command for the backup container allows the backup to happen:

sed 's/"$SECRET_ACCESS_KEY"/"$SECRET_ACCESS_KEY" --api S3v4/' /usr/bin/backup.sh | bash
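
In effect the sed rewrites the mc line in backup.sh before running it, so the alias is added with an explicit signature version. The resulting command looks roughly like this (a sketch based on the command quoted earlier in the thread):

mc -C /tmp/mc config host add dest "${ENDPOINT:-https://s3.amazonaws.com}" "$ACCESS_KEY_ID" "$SECRET_ACCESS_KEY" --api S3v4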

Hello @Zach,

Thank you for submitting the details. Was it you or someone from your team who created this ticket: [K8SPXC-570] Minio client in backup image does not mount S3-compatible storage - Percona JIRA?

As you can see in the comments there, we have tested the latest minio client and it works. We will release PXC Operator 1.7.0 in January with the latest client. Please stay tuned.


That is not me, but that is legit the issue :smiley: Thank you!


Hi, I’m experiencing the same issue,
with Percona 1.7.0 and with 1.8.0 as well.

2021-04-06 10:22:04.219  INFO: [SST script] Backup to s3://XXXXXX/cluster1-2021-04-06-10:21:47-full started
2021-04-06 10:22:04.219  INFO: [SST script] + '[' -n XXXXXX ']'
2021-04-06 10:22:04.219  INFO: [SST script] + backup_s3
2021-04-06 10:22:04.219  INFO: [SST script] + S3_BUCKET_PATH=cluster1-2021-04-06-10:21:47-full
2021-04-06 10:22:04.219  INFO: [SST script] + mc -C /tmp/mc config host add dest https://s3.amazonaws.com ACCESS_KEY_ID SECRET_ACCESS_KEY
2021-04-06 10:22:04.219  INFO: [SST script] + echo 'Backup to s3://XXXXXX/cluster1-2021-04-06-10:21:47-full started'
2021-04-06 10:22:05.604  INFO: [SST script] mc: <ERROR> Unable to initialize new alias from the provided credentials. 400 Bad Request.

Here is what happens if I launch the minio commands from inside the backup pod
with --api s3v2 or s3v4 (the command works):

bash-4.4$ mc -C /tmp/mc config host add dest "${ENDPOINT:-https://s3.amazonaws.com}" $ACCESS_KEY_ID $SECRET_ACCESS_KEY --api s3v2
Added dest successfully.
bash-4.4$ mc -C /tmp/mc config host add dest "${ENDPOINT:-https://s3.amazonaws.com}" $ACCESS_KEY_ID $SECRET_ACCESS_KEY --api s3v4
Added dest successfully.

and without the --api option (as the backup script does it, and it doesn’t work):

bash-4.4$ mc -C /tmp/mc config host add dest "${ENDPOINT:-https://s3.amazonaws.com}" $ACCESS_KEY_ID $SECRET_ACCESS_KEY
mc: <ERROR> Unable to initialize new alias from the provided credentials. 400 Bad Request.

Is there any tested solution to this?
Thank you very much.

Fabio


@fabi8ne could you please share your cr.yaml?
I see that this PR was merged: K8SPXC-570 update 'mc' to RELEASE.2020-12-18T10-53-53Z by delgod · Pull Request #389 · percona/percona-docker · GitHub
And if you use the 1.7.0 backup image, you should not have the issue.

  backup:
    image: percona/percona-xtradb-cluster-operator:1.7.0-pxc8.0-backup

Also, 1.8.0 has not been released yet. When you say that you tried 1.8.0, what do you mean?


Hi, I just cloned the release-1.8.0 branch to get the 1.8.0 version, but now I’m not sure that was the right way to do it.
Anyway, I get the issue with 1.7.0 as well… here is my cr.yaml.txt

No… as a new user I cannot upload files, so here it is inline:

apiVersion: pxc.percona.com/v1-8-0
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  finalizers:
    - delete-pxc-pods-in-order
spec:
  crVersion: 1.8.0
  secretsName: my-cluster-secrets
  vaultSecretName: keyring-secret-vault
  sslSecretName: my-cluster-ssl
  sslInternalSecretName: my-cluster-ssl-internal
  logCollectorSecretName: my-log-collector-secrets
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 8.0-recommended
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.22-13.1
    autoRecovery: true
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: 3parsvil
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
  haproxy:
    enabled: true
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.8.0-haproxy
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  proxysql:
    enabled: false
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.8.0-proxysql
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: 3parsvil
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.8.0-logcollector
  pmm:
    enabled: false
    image: percona/pmm-client:2.12.0
    serverHost: monitoring-service
    serverUser: admin
  backup:
    image: percona/percona-xtradb-cluster-operator:1.8.0-pxc8.0-backup
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
    storages:
      s3-eu-west-1:
        type: s3
        s3:
          bucket: XXXxxxXXXxxxXXX
          credentialsSecret: my-cluster-name-backup-s3
          region: eu-west-1
    schedule:
      - name: "daily-s3-backup"
        schedule: "55 08 * * *"
        keep: 3
        storageName: s3-eu-west-1
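
For completeness, the credentialsSecret referenced above can be created like this (a sketch; the key names must match what the operator reads, as shown in the Go snippet earlier in the thread):

kubectl create secret generic my-cluster-name-backup-s3 \
  --from-literal=AWS_ACCESS_KEY_ID=<your-access-key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your-secret-key>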

And here is my cr.yaml for the 1.7.0 version.
As you can see, I use the same image you suggest for backup:

apiVersion: pxc.percona.com/v1-7-0
kind: PerconaXtraDBCluster
metadata:
  name: cluster02
  finalizers:
    - delete-pxc-pods-in-order
spec:
  crVersion: 1.7.0
  secretsName: my-cluster-secrets
  vaultSecretName: keyring-secret-vault
  sslSecretName: my-cluster-ssl
  sslInternalSecretName: my-cluster-ssl-internal
  logCollectorSecretName: my-log-collector-secrets
  allowUnsafeConfigurations: false
  pause: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: Disabled
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.21-12.1
    autoRecovery: true
    resources:
      requests:
        memory: 1G
        cpu: 650m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: 3parsvil
        resources:
          requests:
            storage: 6Gi
    gracePeriod: 600
  haproxy:
    enabled: true
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.7.0-haproxy
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  proxysql:
    enabled: false
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.7.0-proxysql
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      persistentVolumeClaim:
        storageClassName: 3parsvil
        resources:
          requests:
            storage: 2Gi
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.7.0-logcollector
  pmm:
    enabled: false
    image: percona/pmm-client:2.12.0
    serverHost: monitoring-service
    serverUser: pmm
  backup:
    image: percona/percona-xtradb-cluster-operator:1.7.0-pxc8.0-backup
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
    storages:
      s3-eu-west-1:
        type: s3
        s3:
          bucket: xxxXXXxxxXXX
          credentialsSecret: my-cluster-name-backup-s3
          region: eu-west-1
    schedule:
      - name: "daily-s3-backup"
        schedule: "55 08 * * *"
        keep: 3
        storageName: s3-eu-west-1
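
To see whether a scheduled backup actually ran and what it logged, something like this works (a sketch; the job name is a placeholder):

kubectl get pxc-backup                # list backup objects created by the operator
kubectl logs job/<backup-job-name>    # inspect the backup job output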

Greetings, we were also hit by this issue while trying to create backups. We are on the 1.7.0 release, which uses the percona/percona-xtradb-cluster-operator:1.7.0-pxc8.0-backup image.

I would appreciate it if someone could share a workaround, so that I can continue with my testing. Thanks!


Hi @fabi8ne and @dixo,

I have tested your scenario using PXCO 1.7.0 and the same AWS region, and unfortunately I cannot reproduce the issue:
CR:

  backup:
    image: percona/percona-xtradb-cluster-operator:1.7.0-pxc8.0-backup
#    serviceAccountName: percona-xtradb-cluster-operator
#    imagePullSecrets:
#      - name: private-registry-credentials
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
    storages:
      s3-eu-west-1:
        type: s3
        s3:
          bucket: *****
          credentialsSecret: aws-s3-secret
          region: eu-west-1
    schedule:
      - name: "daily-s3-backup"
        schedule: "00 14 * * *"
        keep: 3
        storageName: s3-eu-west-1

log:

2021-04-16 14:00:09.864  INFO: [SST script] + backup_s3
2021-04-16 14:00:09.865  INFO: [SST script] + S3_BUCKET_PATH=cluster1-2021-04-16-14:00:05-full
2021-04-16 14:00:09.865  INFO: [SST script] + echo 'Backup to s3://******/cluster1-2021-04-16-14:00:05-full started'
2021-04-16 14:00:09.865  INFO: [SST script] Backup to s3://******/cluster1-2021-04-16-14:00:05-full started
2021-04-16 14:00:09.865  INFO: [SST script] + mc -C /tmp/mc config host add dest https://s3.amazonaws.com ACCESS_KEY_ID SECRET_ACCESS_KEY
2021-04-16 14:00:10.348  INFO: [SST script] Added `dest` successfully. 

As you can see, when you add the --api flag, mc does not perform the validation call (see: `mc config host add` can have an option to skip the credentials verify. · Issue #2422 · minio/mc · GitHub). So it is not connected with the mc version.
You need to check your S3 IAM policies; maybe you do not have enough permissions on your bucket, or you have some specific bucket configuration.
I need more information from your end (e.g. an example of your S3 IAM policies) to reproduce it.
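
A quick way to test the same credentials outside the operator (a sketch; assumes the aws CLI is available, and the bucket name is a placeholder):

# If this also fails with Access Denied, the IAM policy, not the operator, is the problem.
AWS_ACCESS_KEY_ID=<key> AWS_SECRET_ACCESS_KEY=<secret> \
  aws s3 ls s3://<your-bucket>/ --region eu-west-1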
